00:00:00.001 Started by upstream project "autotest-per-patch" build number 132726
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.066 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.067 The recommended git tool is: git
00:00:00.067 using credential 00000000-0000-0000-0000-000000000002
00:00:00.068 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.099 Fetching changes from the remote Git repository
00:00:00.105 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.171 Using shallow fetch with depth 1
00:00:00.171 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.171 > git --version # timeout=10
00:00:00.235 > git --version # 'git version 2.39.2'
00:00:00.235 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.281 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.281 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.169 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.183 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.194 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.194 > git config core.sparsecheckout # timeout=10
00:00:04.205 > git read-tree -mu HEAD # timeout=10
00:00:04.220 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.242 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.242 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.337 [Pipeline] Start of Pipeline
00:00:04.356 [Pipeline] library
00:00:04.358 Loading library shm_lib@master
00:00:04.358 Library shm_lib@master is cached. Copying from home.
00:00:04.377 [Pipeline] node
00:00:04.387 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2
00:00:04.389 [Pipeline] {
00:00:04.398 [Pipeline] catchError
00:00:04.400 [Pipeline] {
00:00:04.414 [Pipeline] wrap
00:00:04.422 [Pipeline] {
00:00:04.428 [Pipeline] stage
00:00:04.429 [Pipeline] { (Prologue)
00:00:04.445 [Pipeline] echo
00:00:04.446 Node: VM-host-SM9
00:00:04.451 [Pipeline] cleanWs
00:00:04.460 [WS-CLEANUP] Deleting project workspace...
00:00:04.460 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.466 [WS-CLEANUP] done
00:00:04.690 [Pipeline] setCustomBuildProperty
00:00:04.781 [Pipeline] httpRequest
00:00:05.111 [Pipeline] echo
00:00:05.112 Sorcerer 10.211.164.101 is alive
00:00:05.121 [Pipeline] retry
00:00:05.123 [Pipeline] {
00:00:05.133 [Pipeline] httpRequest
00:00:05.137 HttpMethod: GET
00:00:05.137 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.138 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.143 Response Code: HTTP/1.1 200 OK
00:00:05.144 Success: Status code 200 is in the accepted range: 200,404
00:00:05.145 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:19.574 [Pipeline] }
00:00:19.586 [Pipeline] // retry
00:00:19.592 [Pipeline] sh
00:00:19.864 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:20.154 [Pipeline] httpRequest
00:00:20.519 [Pipeline] echo
00:00:20.521 Sorcerer 10.211.164.101 is alive
00:00:20.531 [Pipeline] retry
00:00:20.533 [Pipeline] {
00:00:20.548 [Pipeline] httpRequest
00:00:20.552 HttpMethod: GET
00:00:20.553 URL: http://10.211.164.101/packages/spdk_cf089b398db10e05fa361e1ed44b582860706d22.tar.gz
00:00:20.554 Sending request to url: http://10.211.164.101/packages/spdk_cf089b398db10e05fa361e1ed44b582860706d22.tar.gz
00:00:20.559 Response Code: HTTP/1.1 200 OK
00:00:20.559 Success: Status code 200 is in the accepted range: 200,404
00:00:20.560 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_cf089b398db10e05fa361e1ed44b582860706d22.tar.gz
00:06:14.668 [Pipeline] }
00:06:14.686 [Pipeline] // retry
00:06:14.696 [Pipeline] sh
00:06:14.975 + tar --no-same-owner -xf spdk_cf089b398db10e05fa361e1ed44b582860706d22.tar.gz
00:06:18.268 [Pipeline] sh
00:06:18.548 + git -C spdk log --oneline -n5
00:06:18.548 cf089b398 thread: fd_group-based interrupts
00:06:18.548 8a4656bc1 thread: move interrupt allocation to a function
00:06:18.548 09908f908 util: add method for setting fd_group's wrapper
00:06:18.548 697130caf util: multi-level fd_group nesting
00:06:18.548 6696ebaae util: keep track of nested child fd_groups
00:06:18.569 [Pipeline] writeFile
00:06:18.587 [Pipeline] sh
00:06:18.999 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:06:19.010 [Pipeline] sh
00:06:19.289 + cat autorun-spdk.conf
00:06:19.289 SPDK_RUN_FUNCTIONAL_TEST=1
00:06:19.289 SPDK_TEST_NVMF=1
00:06:19.289 SPDK_TEST_NVMF_TRANSPORT=tcp
00:06:19.289 SPDK_TEST_USDT=1
00:06:19.289 SPDK_TEST_NVMF_MDNS=1
00:06:19.289 SPDK_RUN_UBSAN=1
00:06:19.289 NET_TYPE=virt
00:06:19.289 SPDK_JSONRPC_GO_CLIENT=1
00:06:19.289 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:06:19.295 RUN_NIGHTLY=0
00:06:19.298 [Pipeline] }
00:06:19.312 [Pipeline] // stage
00:06:19.328 [Pipeline] stage
00:06:19.331 [Pipeline] { (Run VM)
00:06:19.343 [Pipeline] sh
00:06:19.623 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:06:19.623 + echo 'Start stage prepare_nvme.sh'
00:06:19.623 Start stage prepare_nvme.sh
00:06:19.623 + [[ -n 2 ]]
00:06:19.623 + disk_prefix=ex2
00:06:19.623 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]]
00:06:19.623 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]]
00:06:19.623 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf
00:06:19.623 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:19.623 ++ SPDK_TEST_NVMF=1
00:06:19.623 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:06:19.623 ++ SPDK_TEST_USDT=1
00:06:19.623 ++ SPDK_TEST_NVMF_MDNS=1
00:06:19.623 ++ SPDK_RUN_UBSAN=1
00:06:19.623 ++ NET_TYPE=virt
00:06:19.623 ++ SPDK_JSONRPC_GO_CLIENT=1
00:06:19.623 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:06:19.623 ++ RUN_NIGHTLY=0
00:06:19.623 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2
00:06:19.623 + nvme_files=()
00:06:19.623 + declare -A nvme_files
00:06:19.623 + backend_dir=/var/lib/libvirt/images/backends
00:06:19.623 + nvme_files['nvme.img']=5G
00:06:19.623 + nvme_files['nvme-cmb.img']=5G
00:06:19.623 + nvme_files['nvme-multi0.img']=4G
00:06:19.623 + nvme_files['nvme-multi1.img']=4G
00:06:19.623 + nvme_files['nvme-multi2.img']=4G
00:06:19.623 + nvme_files['nvme-openstack.img']=8G
00:06:19.623 + nvme_files['nvme-zns.img']=5G
00:06:19.623 + (( SPDK_TEST_NVME_PMR == 1 ))
00:06:19.623 + (( SPDK_TEST_FTL == 1 ))
00:06:19.623 + (( SPDK_TEST_NVME_FDP == 1 ))
00:06:19.623 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:06:19.623 + for nvme in "${!nvme_files[@]}"
00:06:19.623 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:06:19.623 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:06:19.623 + for nvme in "${!nvme_files[@]}"
00:06:19.623 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:06:19.623 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:06:19.623 + for nvme in "${!nvme_files[@]}"
00:06:19.623 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:06:19.623 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:06:19.623 + for nvme in "${!nvme_files[@]}"
00:06:19.623 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:06:19.623 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:06:19.623 + for nvme in "${!nvme_files[@]}"
00:06:19.623 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:06:19.623 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:06:19.623 + for nvme in "${!nvme_files[@]}"
00:06:19.623 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:06:19.883 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:06:19.883 + for nvme in "${!nvme_files[@]}"
00:06:19.883 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:06:19.883 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:06:19.883 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:06:19.883 + echo 'End stage prepare_nvme.sh'
00:06:19.883 End stage prepare_nvme.sh
00:06:19.894 [Pipeline] sh
00:06:20.173 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:06:20.173 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:06:20.173
00:06:20.173 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant
00:06:20.173 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk
00:06:20.173 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2
00:06:20.173 HELP=0
00:06:20.173 DRY_RUN=0
00:06:20.173 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:06:20.173 NVME_DISKS_TYPE=nvme,nvme,
00:06:20.173 NVME_AUTO_CREATE=0
00:06:20.173 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:06:20.173 NVME_CMB=,,
00:06:20.173 NVME_PMR=,,
00:06:20.173 NVME_ZNS=,,
00:06:20.173 NVME_MS=,,
00:06:20.173 NVME_FDP=,,
00:06:20.173 SPDK_VAGRANT_DISTRO=fedora39
00:06:20.173 SPDK_VAGRANT_VMCPU=10
00:06:20.173 SPDK_VAGRANT_VMRAM=12288
00:06:20.173 SPDK_VAGRANT_PROVIDER=libvirt
00:06:20.173 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:06:20.173 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:06:20.173 SPDK_OPENSTACK_NETWORK=0
00:06:20.173 VAGRANT_PACKAGE_BOX=0
00:06:20.173 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:06:20.173 FORCE_DISTRO=true
00:06:20.173 VAGRANT_BOX_VERSION=
00:06:20.173 EXTRA_VAGRANTFILES=
00:06:20.173 NIC_MODEL=e1000
00:06:20.173
00:06:20.173 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt'
00:06:20.173 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2
00:06:23.460 Bringing machine 'default' up with 'libvirt' provider...
00:06:24.395 ==> default: Creating image (snapshot of base box volume).
00:06:24.395 ==> default: Creating domain with the following settings...
00:06:24.395 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733490030_de1418c859cdc13d729a
00:06:24.395 ==> default: -- Domain type: kvm
00:06:24.395 ==> default: -- Cpus: 10
00:06:24.395 ==> default: -- Feature: acpi
00:06:24.395 ==> default: -- Feature: apic
00:06:24.395 ==> default: -- Feature: pae
00:06:24.395 ==> default: -- Memory: 12288M
00:06:24.395 ==> default: -- Memory Backing: hugepages:
00:06:24.395 ==> default: -- Management MAC:
00:06:24.395 ==> default: -- Loader:
00:06:24.395 ==> default: -- Nvram:
00:06:24.395 ==> default: -- Base box: spdk/fedora39
00:06:24.395 ==> default: -- Storage pool: default
00:06:24.395 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733490030_de1418c859cdc13d729a.img (20G)
00:06:24.395 ==> default: -- Volume Cache: default
00:06:24.395 ==> default: -- Kernel:
00:06:24.395 ==> default: -- Initrd:
00:06:24.395 ==> default: -- Graphics Type: vnc
00:06:24.395 ==> default: -- Graphics Port: -1
00:06:24.395 ==> default: -- Graphics IP: 127.0.0.1
00:06:24.395 ==> default: -- Graphics Password: Not defined
00:06:24.395 ==> default: -- Video Type: cirrus
00:06:24.395 ==> default: -- Video VRAM: 9216
00:06:24.395 ==> default: -- Sound Type:
00:06:24.395 ==> default: -- Keymap: en-us
00:06:24.395 ==> default: -- TPM Path:
00:06:24.395 ==> default: -- INPUT: type=mouse, bus=ps2
00:06:24.395 ==> default: -- Command line args:
00:06:24.395 ==> default: -> value=-device,
00:06:24.395 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:06:24.395 ==> default: -> value=-drive,
00:06:24.395 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:06:24.395 ==> default: -> value=-device,
00:06:24.395 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:06:24.395 ==> default: -> value=-device,
00:06:24.395 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:06:24.395 ==> default: -> value=-drive,
00:06:24.395 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:06:24.395 ==> default: -> value=-device,
00:06:24.395 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:06:24.395 ==> default: -> value=-drive,
00:06:24.395 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:06:24.395 ==> default: -> value=-device,
00:06:24.395 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:06:24.395 ==> default: -> value=-drive,
00:06:24.395 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:06:24.395 ==> default: -> value=-device,
00:06:24.395 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:06:24.395 ==> default: Creating shared folders metadata...
00:06:24.395 ==> default: Starting domain.
00:06:25.770 ==> default: Waiting for domain to get an IP address...
00:06:43.843 ==> default: Waiting for SSH to become available...
00:06:43.843 ==> default: Configuring and enabling network interfaces...
00:06:46.374 default: SSH address: 192.168.121.16:22
00:06:46.374 default: SSH username: vagrant
00:06:46.374 default: SSH auth method: private key
00:06:48.295 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:06:56.417 ==> default: Mounting SSHFS shared folder...
00:06:57.790 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:06:57.790 ==> default: Checking Mount..
00:06:58.811 ==> default: Folder Successfully Mounted!
00:06:58.811 ==> default: Running provisioner: file...
00:06:59.745 default: ~/.gitconfig => .gitconfig
00:07:00.037
00:07:00.037 SUCCESS!
00:07:00.037
00:07:00.037 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:07:00.037 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:07:00.037 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:07:00.037
00:07:00.045 [Pipeline] }
00:07:00.058 [Pipeline] // stage
00:07:00.066 [Pipeline] dir
00:07:00.067 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt
00:07:00.068 [Pipeline] {
00:07:00.079 [Pipeline] catchError
00:07:00.081 [Pipeline] {
00:07:00.094 [Pipeline] sh
00:07:00.372 + vagrant ssh-config --host vagrant+
00:07:00.372 sed -ne /^Host/,$p
00:07:00.372 + tee ssh_conf
00:07:04.645 Host vagrant
00:07:04.645 HostName 192.168.121.16
00:07:04.645 User vagrant
00:07:04.645 Port 22
00:07:04.645 UserKnownHostsFile /dev/null
00:07:04.645 StrictHostKeyChecking no
00:07:04.645 PasswordAuthentication no
00:07:04.645 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:07:04.645 IdentitiesOnly yes
00:07:04.645 LogLevel FATAL
00:07:04.645 ForwardAgent yes
00:07:04.645 ForwardX11 yes
00:07:04.645
00:07:04.658 [Pipeline] withEnv
00:07:04.661 [Pipeline] {
00:07:04.675 [Pipeline] sh
00:07:04.954 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:07:04.954 source /etc/os-release
00:07:04.954 [[ -e /image.version ]] && img=$(< /image.version)
00:07:04.954 # Minimal, systemd-like check.
00:07:04.954 if [[ -e /.dockerenv ]]; then
00:07:04.954 # Clear garbage from the node's name:
00:07:04.954 # agt-er_autotest_547-896 -> autotest_547-896
00:07:04.954 # $HOSTNAME is the actual container id
00:07:04.954 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:07:04.954 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:07:04.954 # We can assume this is a mount from a host where container is running,
00:07:04.954 # so fetch its hostname to easily identify the target swarm worker.
00:07:04.954 container="$(< /etc/hostname) ($agent)"
00:07:04.954 else
00:07:04.954 # Fallback
00:07:04.954 container=$agent
00:07:04.954 fi
00:07:04.954 fi
00:07:04.954 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:07:04.954
00:07:05.225 [Pipeline] }
00:07:05.242 [Pipeline] // withEnv
00:07:05.250 [Pipeline] setCustomBuildProperty
00:07:05.266 [Pipeline] stage
00:07:05.268 [Pipeline] { (Tests)
00:07:05.287 [Pipeline] sh
00:07:05.574 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:07:05.846 [Pipeline] sh
00:07:06.127 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:07:06.401 [Pipeline] timeout
00:07:06.402 Timeout set to expire in 1 hr 0 min
00:07:06.404 [Pipeline] {
00:07:06.422 [Pipeline] sh
00:07:06.704 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:07:07.272 HEAD is now at cf089b398 thread: fd_group-based interrupts
00:07:07.285 [Pipeline] sh
00:07:07.565 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:07:07.838 [Pipeline] sh
00:07:08.118 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:07:08.462 [Pipeline] sh
00:07:08.740 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo
00:07:08.997 ++ readlink -f spdk_repo
00:07:08.997 + DIR_ROOT=/home/vagrant/spdk_repo
00:07:08.997 + [[ -n /home/vagrant/spdk_repo ]]
00:07:08.997 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:07:08.997 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:07:08.997 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:07:08.997 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:07:08.998 + [[ -d /home/vagrant/spdk_repo/output ]]
00:07:08.998 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]]
00:07:08.998 + cd /home/vagrant/spdk_repo
00:07:08.998 + source /etc/os-release
00:07:08.998 ++ NAME='Fedora Linux'
00:07:08.998 ++ VERSION='39 (Cloud Edition)'
00:07:08.998 ++ ID=fedora
00:07:08.998 ++ VERSION_ID=39
00:07:08.998 ++ VERSION_CODENAME=
00:07:08.998 ++ PLATFORM_ID=platform:f39
00:07:08.998 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:07:08.998 ++ ANSI_COLOR='0;38;2;60;110;180'
00:07:08.998 ++ LOGO=fedora-logo-icon
00:07:08.998 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:07:08.998 ++ HOME_URL=https://fedoraproject.org/
00:07:08.998 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:07:08.998 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:07:08.998 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:07:08.998 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:07:08.998 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:07:08.998 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:07:08.998 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:07:08.998 ++ SUPPORT_END=2024-11-12
00:07:08.998 ++ VARIANT='Cloud Edition'
00:07:08.998 ++ VARIANT_ID=cloud
00:07:08.998 + uname -a
00:07:08.998 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:07:08.998 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:07:09.255 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:07:09.255 Hugepages
00:07:09.255 node hugesize free / total
00:07:09.255 node0 1048576kB 0 / 0
00:07:09.255 node0 2048kB 0 / 0
00:07:09.255
00:07:09.255 Type BDF Vendor Device NUMA Driver Device Block devices
00:07:09.512 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:07:09.512 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:07:09.512 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:07:09.512 + rm -f /tmp/spdk-ld-path
00:07:09.512 + source autorun-spdk.conf
00:07:09.512 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:07:09.512 ++ SPDK_TEST_NVMF=1
00:07:09.512 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:07:09.512 ++ SPDK_TEST_USDT=1
00:07:09.512 ++ SPDK_TEST_NVMF_MDNS=1
00:07:09.512 ++ SPDK_RUN_UBSAN=1
00:07:09.512 ++ NET_TYPE=virt
00:07:09.512 ++ SPDK_JSONRPC_GO_CLIENT=1
00:07:09.512 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:07:09.512 ++ RUN_NIGHTLY=0
00:07:09.512 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:07:09.512 + [[ -n '' ]]
00:07:09.512 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:07:09.512 + for M in /var/spdk/build-*-manifest.txt
00:07:09.512 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:07:09.512 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:07:09.512 + for M in /var/spdk/build-*-manifest.txt
00:07:09.512 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:07:09.512 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:07:09.512 + for M in /var/spdk/build-*-manifest.txt
00:07:09.512 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:07:09.512 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:07:09.512 ++ uname
00:07:09.512 + [[ Linux == \L\i\n\u\x ]]
00:07:09.512 + sudo dmesg -T
00:07:09.512 + sudo dmesg --clear
00:07:09.512 + dmesg_pid=5265
00:07:09.513 + [[ Fedora Linux == FreeBSD ]]
00:07:09.513 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:07:09.513 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:07:09.513 + sudo dmesg -Tw
00:07:09.513 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:07:09.513 + [[ -x /usr/src/fio-static/fio ]]
00:07:09.513 + export FIO_BIN=/usr/src/fio-static/fio
00:07:09.513 + FIO_BIN=/usr/src/fio-static/fio
00:07:09.513 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:07:09.513 + [[ ! -v VFIO_QEMU_BIN ]]
00:07:09.513 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:07:09.513 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:09.513 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:09.513 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:07:09.513 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:09.513 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:09.513 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:07:09.513 13:01:16 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:07:09.513 13:01:16 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:07:09.513 13:01:16 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:07:09.513 13:01:16 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:07:09.770 13:01:16 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:07:09.770 13:01:16 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_USDT=1
00:07:09.770 13:01:16 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_MDNS=1
00:07:09.770 13:01:16 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:07:09.770 13:01:16 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt
00:07:09.770 13:01:16 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_JSONRPC_GO_CLIENT=1
00:07:09.770 13:01:16 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:07:09.771 13:01:16 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:07:09.771 13:01:16 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:07:09.771 13:01:16 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:07:09.771 13:01:16 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:07:09.771 13:01:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:09.771 13:01:16 -- scripts/common.sh@15 -- $ shopt -s extglob
00:07:09.771 13:01:16 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:07:09.771 13:01:16 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:09.771 13:01:16 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:09.771 13:01:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.771 13:01:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.771 13:01:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.771 13:01:16 -- paths/export.sh@5 -- $ export PATH
00:07:09.771 13:01:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.771 13:01:16 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:07:09.771 13:01:16 -- common/autobuild_common.sh@493 -- $ date +%s
00:07:09.771 13:01:16 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733490076.XXXXXX
00:07:09.771 13:01:16 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733490076.Y7RYzv
00:07:09.771 13:01:16 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:07:09.771 13:01:16 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:07:09.771 13:01:16 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:07:09.771 13:01:16 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:07:09.771 13:01:16 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:07:09.771 13:01:16 -- common/autobuild_common.sh@509 -- $ get_config_params
00:07:09.771 13:01:16 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:07:09.771 13:01:16 -- common/autotest_common.sh@10 -- $ set +x
00:07:09.771 13:01:16 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang'
00:07:09.771 13:01:16 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:07:09.771 13:01:16 -- pm/common@17 -- $ local monitor
00:07:09.771 13:01:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:09.771 13:01:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:09.771 13:01:16 -- pm/common@21 -- $ date +%s
00:07:09.771 13:01:16 -- pm/common@25 -- $ sleep 1
00:07:09.771 13:01:16 -- pm/common@21 -- $ date +%s
00:07:09.771 13:01:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733490076
00:07:09.771 13:01:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733490076
00:07:09.771 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733490076_collect-vmstat.pm.log
00:07:09.771 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733490076_collect-cpu-load.pm.log
00:07:10.706 13:01:17 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:07:10.706 13:01:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:07:10.706 13:01:17 -- spdk/autobuild.sh@12 -- $ umask 022
00:07:10.706 13:01:17 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:07:10.706 13:01:17 -- spdk/autobuild.sh@16 -- $ date -u
00:07:10.706 Fri Dec 6 01:01:17 PM UTC 2024
00:07:10.706 13:01:17 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:07:10.706 v25.01-pre-308-gcf089b398
00:07:10.706 13:01:17 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:07:10.706 13:01:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:07:10.706 13:01:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:07:10.706 13:01:17 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:07:10.706 13:01:17 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:07:10.706 13:01:17 -- common/autotest_common.sh@10 -- $ set +x
00:07:10.706 ************************************
00:07:10.706 START TEST ubsan
00:07:10.706 ************************************
00:07:10.706 using ubsan
00:07:10.706 13:01:17 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:07:10.706
00:07:10.706 real 0m0.000s
00:07:10.706 user 0m0.000s
00:07:10.706 sys 0m0.000s
00:07:10.706 13:01:17 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:07:10.706 13:01:17 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:07:10.706 ************************************
00:07:10.706 END TEST ubsan
00:07:10.706 ************************************
00:07:10.706 13:01:17 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:07:10.706 13:01:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:07:10.706 13:01:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:07:10.706 13:01:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:07:10.706 13:01:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:07:10.706 13:01:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:07:10.706 13:01:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:07:10.706 13:01:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:07:10.706 13:01:17 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
00:07:10.964 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:07:10.964 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:07:11.221 Using 'verbs' RDMA provider
00:07:24.371 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:07:36.567 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:07:36.567 go version go1.21.1 linux/amd64
00:07:36.824 Creating mk/config.mk...done.
00:07:36.824 Creating mk/cc.flags.mk...done.
00:07:36.824 Type 'make' to build.
00:07:36.824 13:01:43 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:07:36.824 13:01:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:07:36.824 13:01:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:07:36.824 13:01:43 -- common/autotest_common.sh@10 -- $ set +x
00:07:36.824 ************************************
00:07:36.824 START TEST make
00:07:36.824 ************************************
00:07:36.824 13:01:43 make -- common/autotest_common.sh@1129 -- $ make -j10
00:07:37.081 make[1]: Nothing to be done for 'all'.
00:08:03.615 The Meson build system
00:08:03.615 Version: 1.5.0
00:08:03.615 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:08:03.615 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:08:03.615 Build type: native build
00:08:03.615 Program cat found: YES (/usr/bin/cat)
00:08:03.615 Project name: DPDK
00:08:03.615 Project version: 24.03.0
00:08:03.615 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:08:03.615 C linker for the host machine: cc ld.bfd 2.40-14
00:08:03.615 Host machine cpu family: x86_64
00:08:03.615 Host machine cpu: x86_64
00:08:03.615 Message: ## Building in Developer Mode ##
00:08:03.615 Program pkg-config found: YES (/usr/bin/pkg-config)
00:08:03.615 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:08:03.615 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:08:03.615 Program python3 found: YES (/usr/bin/python3)
00:08:03.615 Program cat found: YES (/usr/bin/cat)
00:08:03.615 Compiler for C supports arguments -march=native: YES
00:08:03.615 Checking for size of "void *" : 8
00:08:03.615 Checking for size of "void *" : 8 (cached)
00:08:03.615 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:08:03.615 Library m found: YES
00:08:03.615 Library numa found: YES
00:08:03.615 Has header "numaif.h" : YES
00:08:03.615 Library fdt found: NO
00:08:03.615 Library execinfo found: NO
00:08:03.615 Has header "execinfo.h" : YES
00:08:03.615 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:08:03.615 Run-time dependency libarchive found: NO (tried pkgconfig)
00:08:03.615 Run-time dependency libbsd found: NO (tried pkgconfig)
00:08:03.615 Run-time dependency jansson found: NO (tried pkgconfig)
00:08:03.615 Run-time dependency openssl found: YES 3.1.1
00:08:03.615 Run-time dependency libpcap found: YES 1.10.4
00:08:03.615 Has header "pcap.h" with dependency libpcap: YES
00:08:03.615 Compiler for C supports arguments -Wcast-qual: YES
00:08:03.615 Compiler for C supports arguments -Wdeprecated: YES
00:08:03.615 Compiler for C supports arguments -Wformat: YES
00:08:03.615 Compiler for C supports arguments -Wformat-nonliteral: NO
00:08:03.615 Compiler for C supports arguments -Wformat-security: NO
00:08:03.615 Compiler for C supports arguments -Wmissing-declarations: YES
00:08:03.615 Compiler for C supports arguments -Wmissing-prototypes: YES
00:08:03.615 Compiler for C supports arguments -Wnested-externs: YES
00:08:03.615 Compiler for C supports arguments -Wold-style-definition: YES
00:08:03.615 Compiler for C supports arguments -Wpointer-arith: YES
00:08:03.615 Compiler for C supports arguments -Wsign-compare: YES
00:08:03.615 Compiler for C supports arguments -Wstrict-prototypes: YES
00:08:03.615 Compiler for C supports arguments -Wundef: YES
00:08:03.615 Compiler for C supports arguments -Wwrite-strings: YES
00:08:03.615 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:08:03.615 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:08:03.615 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:08:03.615 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:08:03.615 Program objdump found: YES (/usr/bin/objdump)
00:08:03.615 Compiler for C supports arguments -mavx512f: YES
00:08:03.615 Checking if "AVX512 checking" compiles: YES
00:08:03.615 Fetching value of define "__SSE4_2__" : 1
00:08:03.615 Fetching value of define "__AES__" : 1
00:08:03.615 Fetching value of define "__AVX__" : 1
00:08:03.615 Fetching value of define "__AVX2__" : 1
00:08:03.615 Fetching value of define "__AVX512BW__" : (undefined)
00:08:03.615 Fetching value of define "__AVX512CD__" : (undefined)
00:08:03.615 Fetching value of define "__AVX512DQ__" : (undefined)
00:08:03.615 Fetching value of define "__AVX512F__" : (undefined)
00:08:03.615 Fetching value of define "__AVX512VL__" : (undefined)
00:08:03.615 Fetching value of define "__PCLMUL__" : 1
00:08:03.615 Fetching value of define "__RDRND__" : 1
00:08:03.615 Fetching value of define "__RDSEED__" : 1
00:08:03.615 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:08:03.615 Fetching value of define "__znver1__" : (undefined)
00:08:03.615 Fetching value of define "__znver2__" : (undefined)
00:08:03.615 Fetching value of define "__znver3__" : (undefined)
00:08:03.615 Fetching value of define "__znver4__" : (undefined)
00:08:03.615 Compiler for C supports arguments -Wno-format-truncation: YES
00:08:03.615 Message: lib/log: Defining dependency "log"
00:08:03.615 Message: lib/kvargs: Defining dependency "kvargs"
00:08:03.615 Message: lib/telemetry: Defining dependency "telemetry"
00:08:03.615 Checking for function "getentropy" : NO
00:08:03.615 Message: lib/eal: Defining dependency "eal"
00:08:03.615 Message: lib/ring: Defining dependency "ring"
00:08:03.615 Message: lib/rcu: Defining dependency "rcu"
00:08:03.615 Message: lib/mempool: Defining dependency "mempool"
00:08:03.615 Message: lib/mbuf: Defining dependency "mbuf"
00:08:03.615 Fetching value of define "__PCLMUL__" : 1 (cached)
00:08:03.615 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:08:03.615 Compiler for C supports arguments -mpclmul: YES
00:08:03.615 Compiler for C supports arguments -maes: YES
00:08:03.615 Compiler for C supports arguments -mavx512f: YES (cached)
00:08:03.615 Compiler for C supports arguments -mavx512bw: YES
00:08:03.615 Compiler for C supports arguments -mavx512dq: YES
00:08:03.615 Compiler for C supports arguments -mavx512vl: YES
00:08:03.615 Compiler for C supports arguments -mvpclmulqdq: YES
00:08:03.615 Compiler for C supports arguments -mavx2: YES
00:08:03.615 Compiler for C supports arguments -mavx: YES
00:08:03.615 Message: lib/net: Defining dependency "net"
00:08:03.615 Message: lib/meter: Defining dependency "meter"
00:08:03.615 Message: lib/ethdev: Defining dependency "ethdev"
00:08:03.615 Message: lib/pci: Defining dependency "pci"
00:08:03.615 Message: lib/cmdline: Defining dependency "cmdline"
00:08:03.615 Message: lib/hash: Defining dependency "hash"
00:08:03.615 Message: lib/timer: Defining dependency "timer"
00:08:03.615 Message: lib/compressdev: Defining dependency "compressdev"
00:08:03.615 Message: lib/cryptodev: Defining dependency "cryptodev"
00:08:03.615 Message: lib/dmadev: Defining dependency "dmadev"
00:08:03.615 Compiler for C supports arguments -Wno-cast-qual: YES
00:08:03.615 Message: lib/power: Defining dependency "power"
00:08:03.615 Message: lib/reorder: Defining dependency "reorder"
00:08:03.615 Message: lib/security: Defining dependency "security"
00:08:03.615 Has header "linux/userfaultfd.h" : YES
00:08:03.615 Has header "linux/vduse.h" : YES
00:08:03.615 Message: lib/vhost: Defining dependency "vhost"
00:08:03.615 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:08:03.615 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:08:03.615 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:08:03.615 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:08:03.615 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:08:03.615 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:08:03.615 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:08:03.615 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:08:03.615 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:08:03.615 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:08:03.615 Program doxygen found: YES (/usr/local/bin/doxygen)
00:08:03.615 Configuring doxy-api-html.conf using configuration
00:08:03.615 Configuring doxy-api-man.conf using configuration
00:08:03.616 Program mandb found: YES (/usr/bin/mandb)
00:08:03.616 Program sphinx-build found: NO
00:08:03.616 Configuring rte_build_config.h using configuration
00:08:03.616 Message:
00:08:03.616 =================
00:08:03.616 Applications Enabled
00:08:03.616 =================
00:08:03.616
00:08:03.616 apps:
00:08:03.616
00:08:03.616
00:08:03.616 Message:
00:08:03.616 =================
00:08:03.616 Libraries Enabled
00:08:03.616 =================
00:08:03.616
00:08:03.616 libs:
00:08:03.616 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:08:03.616 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:08:03.616 cryptodev, dmadev, power, reorder, security, vhost,
00:08:03.616
00:08:03.616 Message:
00:08:03.616 ===============
00:08:03.616 Drivers Enabled
00:08:03.616 ===============
00:08:03.616
00:08:03.616 common:
00:08:03.616
00:08:03.616 bus:
00:08:03.616 pci, vdev,
00:08:03.616 mempool:
00:08:03.616 ring,
00:08:03.616 dma:
00:08:03.616
00:08:03.616 net:
00:08:03.616
00:08:03.616 crypto:
00:08:03.616
00:08:03.616 compress:
00:08:03.616
00:08:03.616 vdpa:
00:08:03.616
00:08:03.616
00:08:03.616 Message:
00:08:03.616 =================
00:08:03.616 Content Skipped
00:08:03.616 =================
00:08:03.616
00:08:03.616 apps:
00:08:03.616 dumpcap: explicitly disabled via build config
00:08:03.616 graph: explicitly disabled via build config
00:08:03.616 pdump: explicitly disabled via build config
00:08:03.616 proc-info: explicitly disabled via build config
00:08:03.616 test-acl: explicitly disabled via build config
00:08:03.616 test-bbdev: explicitly disabled via build config
00:08:03.616 test-cmdline: explicitly disabled via build config
00:08:03.616 test-compress-perf: explicitly disabled via build config
00:08:03.616 test-crypto-perf: explicitly disabled via build config
00:08:03.616 test-dma-perf: explicitly disabled via build config
00:08:03.616 test-eventdev: explicitly disabled via build config
00:08:03.616 test-fib: explicitly disabled via build config
00:08:03.616 test-flow-perf: explicitly disabled via build config
00:08:03.616 test-gpudev: explicitly disabled via build config
00:08:03.616 test-mldev: explicitly disabled via build config
00:08:03.616 test-pipeline: explicitly disabled via build config
00:08:03.616 test-pmd: explicitly disabled via build config
00:08:03.616 test-regex: explicitly disabled via build config
00:08:03.616 test-sad: explicitly disabled via build config
00:08:03.616 test-security-perf: explicitly disabled via build config
00:08:03.616
00:08:03.616 libs:
00:08:03.616 argparse: explicitly disabled via build config
00:08:03.616 metrics: explicitly disabled via build config
00:08:03.616 acl: explicitly disabled via build config
00:08:03.616 bbdev: explicitly disabled via build config
00:08:03.616 bitratestats: explicitly disabled via build config
00:08:03.616 bpf: explicitly disabled via build config
00:08:03.616 cfgfile: explicitly disabled via build config
00:08:03.616 distributor: explicitly disabled via build config
00:08:03.616 efd: explicitly disabled via build config
00:08:03.616 eventdev: explicitly disabled via build config
00:08:03.616 dispatcher: explicitly disabled via build config
00:08:03.616 gpudev: explicitly disabled via build config
00:08:03.616 gro: explicitly disabled via build config
00:08:03.616 gso: explicitly disabled via build config
00:08:03.616 ip_frag: explicitly disabled via build config
00:08:03.616 jobstats: explicitly disabled via build config
00:08:03.616 latencystats: explicitly disabled via build config
00:08:03.616 lpm: explicitly disabled via build config
00:08:03.616 member: explicitly disabled via build config
00:08:03.616 pcapng: explicitly disabled via build config
00:08:03.616 rawdev: explicitly disabled via build config
00:08:03.616 regexdev: explicitly disabled via build config
00:08:03.616 mldev: explicitly disabled via build config
00:08:03.616 rib: explicitly disabled via build config
00:08:03.616 sched: explicitly disabled via build config
00:08:03.616 stack: explicitly disabled via build config
00:08:03.616 ipsec: explicitly disabled via build config
00:08:03.616 pdcp: explicitly disabled via build config
00:08:03.616 fib: explicitly disabled via build config
00:08:03.616 port: explicitly disabled via build config
00:08:03.616 pdump: explicitly disabled via build config
00:08:03.616 table: explicitly disabled via build config
00:08:03.616 pipeline: explicitly disabled via build config
00:08:03.616 graph: explicitly disabled via build config
00:08:03.616 node: explicitly disabled via build config
00:08:03.616
00:08:03.616 drivers:
00:08:03.616 common/cpt: not in enabled drivers build config
00:08:03.616 common/dpaax: not in enabled drivers build config
00:08:03.616 common/iavf: not in enabled drivers build config
00:08:03.616 common/idpf: not in enabled drivers build config
00:08:03.616 common/ionic: not in enabled drivers build config
00:08:03.616 common/mvep: not in enabled drivers build config
00:08:03.616 common/octeontx: not in enabled drivers build config
00:08:03.616 bus/auxiliary: not in enabled drivers build config
00:08:03.616 bus/cdx: not in enabled drivers build config
00:08:03.616 bus/dpaa: not in enabled drivers build config
00:08:03.616 bus/fslmc: not in enabled drivers build config
00:08:03.616 bus/ifpga: not in enabled drivers build config
00:08:03.616 bus/platform: not in enabled drivers build config
00:08:03.616 bus/uacce: not in enabled drivers build config
00:08:03.616 bus/vmbus: not in enabled drivers build config
00:08:03.616 common/cnxk: not in enabled drivers build config
00:08:03.616 common/mlx5: not in enabled drivers build config
00:08:03.616 common/nfp: not in enabled drivers build config
00:08:03.616 common/nitrox: not in enabled drivers build config
00:08:03.616 common/qat: not in enabled drivers build config
00:08:03.616 common/sfc_efx: not in enabled drivers build config
00:08:03.616 mempool/bucket: not in enabled drivers build config
00:08:03.616 mempool/cnxk: not in enabled drivers build config
00:08:03.616 mempool/dpaa: not in enabled drivers build config
00:08:03.616 mempool/dpaa2: not in enabled drivers build config
00:08:03.616 mempool/octeontx: not in enabled drivers build config
00:08:03.616 mempool/stack: not in enabled drivers build config
00:08:03.616 dma/cnxk: not in enabled drivers build config
00:08:03.616 dma/dpaa: not in enabled drivers build config
00:08:03.616 dma/dpaa2: not in enabled drivers build config
00:08:03.616 dma/hisilicon: not in enabled drivers build config
00:08:03.616 dma/idxd: not in enabled drivers build config
00:08:03.616 dma/ioat: not in enabled drivers build config
00:08:03.616 dma/skeleton: not in enabled drivers build config
00:08:03.616 net/af_packet: not in enabled drivers build config
00:08:03.616 net/af_xdp: not in enabled drivers build config
00:08:03.616 net/ark: not in enabled drivers build config
00:08:03.616 net/atlantic: not in enabled drivers build config
00:08:03.616 net/avp: not in enabled drivers build config
00:08:03.616 net/axgbe: not in enabled drivers build config
00:08:03.616 net/bnx2x: not in enabled drivers build config
00:08:03.616 net/bnxt: not in enabled drivers build config
00:08:03.616 net/bonding: not in enabled drivers build config
00:08:03.616 net/cnxk: not in enabled drivers build config
00:08:03.616 net/cpfl: not in enabled drivers build config
00:08:03.616 net/cxgbe: not in enabled drivers build config
00:08:03.616 net/dpaa: not in enabled drivers build config
00:08:03.616 net/dpaa2: not in enabled drivers build config
00:08:03.616 net/e1000: not in enabled drivers build config
00:08:03.616 net/ena: not in enabled drivers build config
00:08:03.616 net/enetc: not in enabled drivers build config
00:08:03.616 net/enetfec: not in enabled drivers build config
00:08:03.616 net/enic: not in enabled drivers build config
00:08:03.616 net/failsafe: not in enabled drivers build config
00:08:03.616 net/fm10k: not in enabled drivers build config
00:08:03.616 net/gve: not in enabled drivers build config
00:08:03.616 net/hinic: not in enabled drivers build config
00:08:03.616 net/hns3: not in enabled drivers build config
00:08:03.616 net/i40e: not in enabled drivers build config
00:08:03.616 net/iavf: not in enabled drivers build config
00:08:03.616 net/ice: not in enabled drivers build config
00:08:03.616 net/idpf: not in enabled drivers build config
00:08:03.616 net/igc: not in enabled drivers build config
00:08:03.616 net/ionic: not in enabled drivers build config
00:08:03.616 net/ipn3ke: not in enabled drivers build config
00:08:03.616 net/ixgbe: not in enabled drivers build config
00:08:03.616 net/mana: not in enabled drivers build config
00:08:03.616 net/memif: not in enabled drivers build config
00:08:03.616 net/mlx4: not in enabled drivers build config
00:08:03.616 net/mlx5: not in enabled drivers build config
00:08:03.616 net/mvneta: not in enabled drivers build config
00:08:03.616 net/mvpp2: not in enabled drivers build config
00:08:03.616 net/netvsc: not in enabled drivers build config
00:08:03.616 net/nfb: not in enabled drivers build config
00:08:03.616 net/nfp: not in enabled drivers build config
00:08:03.616 net/ngbe: not in enabled drivers build config
00:08:03.616 net/null: not in enabled drivers build config
00:08:03.616 net/octeontx: not in enabled drivers build config
00:08:03.616 net/octeon_ep: not in enabled drivers build config
00:08:03.616 net/pcap: not in enabled drivers build config
00:08:03.616 net/pfe: not in enabled drivers build config
00:08:03.616 net/qede: not in enabled drivers build config
00:08:03.616 net/ring: not in enabled drivers build config
00:08:03.616 net/sfc: not in enabled drivers build config
00:08:03.616 net/softnic: not in enabled drivers build config
00:08:03.616 net/tap: not in enabled drivers build config
00:08:03.616 net/thunderx: not in enabled drivers build config
00:08:03.617 net/txgbe: not in enabled drivers build config
00:08:03.617 net/vdev_netvsc: not in enabled drivers build config
00:08:03.617 net/vhost: not in enabled drivers build config
00:08:03.617 net/virtio: not in enabled drivers build config
00:08:03.617 net/vmxnet3: not in enabled drivers build config
00:08:03.617 raw/*: missing internal dependency, "rawdev"
00:08:03.617 crypto/armv8: not in enabled drivers build config
00:08:03.617 crypto/bcmfs: not in enabled drivers build config
00:08:03.617 crypto/caam_jr: not in enabled drivers build config
00:08:03.617 crypto/ccp: not in enabled drivers build config
00:08:03.617 crypto/cnxk: not in enabled drivers build config
00:08:03.617 crypto/dpaa_sec: not in enabled drivers build config
00:08:03.617 crypto/dpaa2_sec: not in enabled drivers build config
00:08:03.617 crypto/ipsec_mb: not in enabled drivers build config
00:08:03.617 crypto/mlx5: not in enabled drivers build config
00:08:03.617 crypto/mvsam: not in enabled drivers build config
00:08:03.617 crypto/nitrox: not in enabled drivers build config
00:08:03.617 crypto/null: not in enabled drivers build config
00:08:03.617 crypto/octeontx: not in enabled drivers build config
00:08:03.617 crypto/openssl: not in enabled drivers build config
00:08:03.617 crypto/scheduler: not in enabled drivers build config
00:08:03.617 crypto/uadk: not in enabled drivers build config
00:08:03.617 crypto/virtio: not in enabled drivers build config
00:08:03.617 compress/isal: not in enabled drivers build config
00:08:03.617 compress/mlx5: not in enabled drivers build config
00:08:03.617 compress/nitrox: not in enabled drivers build config
00:08:03.617 compress/octeontx: not in enabled drivers build config
00:08:03.617 compress/zlib: not in enabled drivers build config
00:08:03.617 regex/*: missing internal dependency, "regexdev"
00:08:03.617 ml/*: missing internal dependency, "mldev"
00:08:03.617 vdpa/ifc: not in enabled drivers build config
00:08:03.617 vdpa/mlx5: not in enabled drivers build config
00:08:03.617 vdpa/nfp: not in enabled drivers build config
00:08:03.617 vdpa/sfc: not in enabled drivers build config
00:08:03.617 event/*: missing internal dependency, "eventdev"
00:08:03.617 baseband/*: missing internal dependency, "bbdev"
00:08:03.617 gpu/*: missing internal dependency, "gpudev"
00:08:03.617
00:08:03.617
00:08:03.617 Build targets in project: 85
00:08:03.617
00:08:03.617 DPDK 24.03.0
00:08:03.617
00:08:03.617 User defined options
00:08:03.617 buildtype : debug
00:08:03.617 default_library : shared
00:08:03.617 libdir : lib
00:08:03.617 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:08:03.617 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:08:03.617 c_link_args :
00:08:03.617 cpu_instruction_set: native
00:08:03.617 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:08:03.617 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:08:03.617 enable_docs : false
00:08:03.617 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:08:03.617 enable_kmods : false
00:08:03.617 max_lcores : 128
00:08:03.617 tests : false
00:08:03.617
00:08:03.617 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:08:03.617 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:08:03.617 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:08:03.617 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:08:03.617 [3/268] Linking static target lib/librte_log.a
00:08:03.617 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:08:03.617 [5/268] Linking static target lib/librte_kvargs.a
00:08:03.617 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:08:03.617 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:08:03.876 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:08:04.135 [9/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:08:04.135 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:08:04.395 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:08:04.395 [12/268] Linking target lib/librte_log.so.24.1
00:08:04.395 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:08:04.395 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:08:04.395 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:08:04.395 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:08:04.687 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:08:04.687 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:08:04.687 [19/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:08:04.687 [20/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:08:04.687 [21/268] Linking static target lib/librte_telemetry.a
00:08:04.687 [22/268] Linking target lib/librte_kvargs.so.24.1
00:08:05.254 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:08:05.822 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:08:05.822 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:08:05.822 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:08:05.822 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:08:05.822 [28/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:08:05.822 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:08:06.081 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:08:06.081 [31/268] Linking target lib/librte_telemetry.so.24.1
00:08:06.081 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:08:06.081 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:08:06.339 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:08:06.339 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:08:06.339 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:08:06.600 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:08:06.858 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:08:07.116 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:08:07.116 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:08:07.375 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:08:07.375 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:08:07.375 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:08:07.633 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:08:07.633 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:08:07.633 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:08:07.891 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:08:07.892 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:08:08.149 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:08:08.149 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:08:08.408 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:08:08.408 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:08:09.341 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:08:09.342 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:08:09.342 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:08:09.342 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:08:09.342 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:08:09.599 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:08:09.599 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:08:09.599 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:08:09.599 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:08:09.856 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:08:10.790 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:08:10.790 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:08:10.790 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:08:11.047 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:08:11.047 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:08:11.047 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:08:11.304 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:08:11.304 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:08:11.304 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:08:11.304 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:08:11.561 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:08:11.561 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:08:11.561 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:08:11.561 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:08:12.128 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:08:12.128 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:08:12.694 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:08:12.694 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:08:12.694 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:08:13.006 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:08:13.006 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:08:13.006 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:08:13.280 [85/268] Linking static target lib/librte_eal.a
00:08:13.280 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:08:13.280 [87/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:08:13.280 [88/268] Linking static target lib/librte_ring.a
00:08:13.280 [89/268] Linking static target lib/librte_rcu.a
00:08:13.845 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:08:13.845 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:08:13.845 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:08:14.102 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:08:14.102 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:08:14.102 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:08:14.102 [96/268] Linking static target lib/librte_mempool.a
00:08:14.102 [97/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:08:14.360 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:08:14.618 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:08:14.618 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:08:14.878 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:08:14.878 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:08:15.156 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:08:15.156 [104/268] Linking static target lib/librte_mbuf.a
00:08:15.156 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:08:15.424 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:08:15.424 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:08:15.683 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:08:15.683 [109/268] Linking static target lib/librte_net.a
00:08:15.683 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:08:15.683 [111/268] Linking static target lib/librte_meter.a
00:08:15.941 [112/268] Generating
lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:08:16.208 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:08:16.208 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:08:16.477 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:08:16.477 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:08:16.477 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:08:16.477 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:08:16.752 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:17.713 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:08:17.713 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:08:17.713 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:08:17.984 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:08:17.984 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:08:18.287 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:08:18.287 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:08:18.287 [127/268] Linking static target lib/librte_pci.a 00:08:18.287 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:08:18.287 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:08:18.287 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:08:18.568 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:08:18.840 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:18.840 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:08:18.840 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:08:19.098 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:08:19.098 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:08:19.098 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:08:19.098 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:08:19.098 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:08:19.098 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:08:19.098 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:08:19.356 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:08:19.356 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:08:19.356 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:08:19.356 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:08:19.615 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:08:19.615 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:08:19.615 [148/268] Linking static target lib/librte_cmdline.a 00:08:20.183 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:08:20.183 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:08:20.442 [151/268] Compiling C 
object lib/librte_timer.a.p/timer_rte_timer.c.o 00:08:20.442 [152/268] Linking static target lib/librte_timer.a 00:08:20.442 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:08:21.032 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:08:21.032 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:08:21.032 [156/268] Linking static target lib/librte_ethdev.a 00:08:21.032 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:08:21.332 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:08:21.332 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:08:21.604 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:08:21.604 [161/268] Linking static target lib/librte_hash.a 00:08:21.604 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:08:21.604 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:08:21.604 [164/268] Linking static target lib/librte_compressdev.a 00:08:22.173 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:08:22.173 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:08:22.173 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:08:22.431 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:08:22.431 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:08:22.431 [170/268] Linking static target lib/librte_dmadev.a 00:08:22.431 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:08:22.708 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:08:23.281 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:23.281 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:08:23.281 [175/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:08:23.281 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:08:23.281 [177/268] Linking static target lib/librte_cryptodev.a 00:08:23.539 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:08:23.539 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:08:23.814 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:08:23.814 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.101 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:08:24.101 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:08:24.388 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:08:24.653 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:08:24.653 [186/268] Linking static target lib/librte_power.a 00:08:24.911 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:08:24.911 [188/268] Linking static target lib/librte_reorder.a 00:08:25.170 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:08:25.170 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:08:25.170 [191/268] Linking static target 
lib/librte_security.a 00:08:25.427 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:08:25.993 [193/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:08:25.993 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:08:25.993 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:08:26.251 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:08:26.251 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:08:26.509 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:08:26.766 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:26.766 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:08:27.330 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:08:27.330 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:08:27.593 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:08:27.593 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:08:27.593 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:08:27.874 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:08:28.133 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:08:28.390 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:08:28.390 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:08:28.390 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:08:28.390 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:08:28.956 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:08:28.956 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:28.956 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:28.956 [215/268] Linking static target drivers/librte_bus_pci.a 00:08:28.956 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:08:28.956 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:28.956 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:28.956 [219/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:08:28.956 [220/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:08:28.956 [221/268] Linking static target drivers/librte_bus_vdev.a 00:08:28.956 [222/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:08:29.213 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:08:29.213 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:29.213 [225/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:29.213 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:29.472 [227/268] Linking static target drivers/librte_mempool_ring.a 00:08:29.472 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:08:29.730 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:08:29.730 [230/268] Linking target lib/librte_eal.so.24.1 00:08:29.987 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:08:29.987 [232/268] Linking target lib/librte_pci.so.24.1 00:08:29.987 [233/268] Linking target lib/librte_meter.so.24.1 00:08:29.987 [234/268] Linking target lib/librte_timer.so.24.1 00:08:29.987 [235/268] Linking target lib/librte_ring.so.24.1 00:08:29.987 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:08:30.247 [237/268] Linking target lib/librte_dmadev.so.24.1 00:08:30.247 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:08:30.247 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:08:30.247 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:08:30.247 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:08:30.247 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:08:30.247 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:08:30.247 [244/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:08:30.247 [245/268] Linking target lib/librte_mempool.so.24.1 00:08:30.247 [246/268] Linking target lib/librte_rcu.so.24.1 00:08:30.506 [247/268] Linking static target lib/librte_vhost.a 00:08:30.506 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:08:30.506 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:08:30.506 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:08:30.764 [251/268] Linking target lib/librte_mbuf.so.24.1 00:08:30.764 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:08:30.764 [253/268] Linking target lib/librte_reorder.so.24.1 00:08:30.764 [254/268] Linking target lib/librte_net.so.24.1 00:08:30.764 [255/268] Linking target lib/librte_compressdev.so.24.1 00:08:30.764 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:08:31.042 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:08:31.042 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:08:31.303 [259/268] Linking target lib/librte_cmdline.so.24.1 00:08:31.303 [260/268] Linking target lib/librte_hash.so.24.1 00:08:31.303 [261/268] Linking target lib/librte_security.so.24.1 00:08:31.564 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:08:32.133 [263/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:32.134 [264/268] Linking target lib/librte_ethdev.so.24.1 00:08:32.134 [265/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:08:32.393 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:08:32.393 [267/268] Linking target lib/librte_power.so.24.1 00:08:32.393 [268/268] Linking target lib/librte_vhost.so.24.1 00:08:32.393 INFO: autodetecting backend as ninja 00:08:32.393 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:09:11.095 CC lib/ut/ut.o 00:09:11.095 CC lib/ut_mock/mock.o 00:09:11.095 CC lib/log/log.o 00:09:11.095 CC lib/log/log_flags.o 
00:09:11.095 CC lib/log/log_deprecated.o 00:09:11.095 LIB libspdk_ut.a 00:09:11.095 LIB libspdk_ut_mock.a 00:09:11.095 SO libspdk_ut.so.2.0 00:09:11.095 SO libspdk_ut_mock.so.6.0 00:09:11.095 SYMLINK libspdk_ut.so 00:09:11.095 LIB libspdk_log.a 00:09:11.095 SYMLINK libspdk_ut_mock.so 00:09:11.095 SO libspdk_log.so.7.1 00:09:11.095 SYMLINK libspdk_log.so 00:09:11.095 CC lib/dma/dma.o 00:09:11.095 CC lib/util/base64.o 00:09:11.095 CC lib/util/bit_array.o 00:09:11.095 CC lib/util/crc16.o 00:09:11.095 CC lib/util/cpuset.o 00:09:11.095 CC lib/util/crc32.o 00:09:11.095 CC lib/util/crc32c.o 00:09:11.095 CXX lib/trace_parser/trace.o 00:09:11.095 CC lib/ioat/ioat.o 00:09:11.095 CC lib/vfio_user/host/vfio_user_pci.o 00:09:11.095 CC lib/util/crc32_ieee.o 00:09:11.095 CC lib/util/crc64.o 00:09:11.095 CC lib/util/dif.o 00:09:11.095 CC lib/util/fd.o 00:09:11.095 LIB libspdk_dma.a 00:09:11.095 CC lib/util/fd_group.o 00:09:11.095 SO libspdk_dma.so.5.0 00:09:11.095 CC lib/util/file.o 00:09:11.095 CC lib/vfio_user/host/vfio_user.o 00:09:11.095 SYMLINK libspdk_dma.so 00:09:11.095 CC lib/util/hexlify.o 00:09:11.095 LIB libspdk_ioat.a 00:09:11.095 CC lib/util/iov.o 00:09:11.095 SO libspdk_ioat.so.7.0 00:09:11.095 CC lib/util/math.o 00:09:11.095 CC lib/util/net.o 00:09:11.095 SYMLINK libspdk_ioat.so 00:09:11.095 CC lib/util/pipe.o 00:09:11.095 CC lib/util/strerror_tls.o 00:09:11.095 LIB libspdk_vfio_user.a 00:09:11.095 CC lib/util/string.o 00:09:11.095 CC lib/util/uuid.o 00:09:11.095 SO libspdk_vfio_user.so.5.0 00:09:11.095 CC lib/util/xor.o 00:09:11.095 SYMLINK libspdk_vfio_user.so 00:09:11.095 CC lib/util/zipf.o 00:09:11.095 CC lib/util/md5.o 00:09:11.095 LIB libspdk_util.a 00:09:11.095 LIB libspdk_trace_parser.a 00:09:11.095 SO libspdk_util.so.10.1 00:09:11.095 SO libspdk_trace_parser.so.6.0 00:09:11.095 SYMLINK libspdk_trace_parser.so 00:09:11.095 SYMLINK libspdk_util.so 00:09:11.095 CC lib/rdma_utils/rdma_utils.o 00:09:11.095 CC lib/json/json_parse.o 00:09:11.095 CC lib/conf/conf.o 00:09:11.095 CC lib/json/json_util.o 00:09:11.095 CC lib/json/json_write.o 00:09:11.095 CC lib/idxd/idxd.o 00:09:11.095 CC lib/idxd/idxd_user.o 00:09:11.095 CC lib/vmd/vmd.o 00:09:11.095 CC lib/idxd/idxd_kernel.o 00:09:11.095 CC lib/env_dpdk/env.o 00:09:11.095 CC lib/env_dpdk/memory.o 00:09:11.095 LIB libspdk_conf.a 00:09:11.095 LIB libspdk_rdma_utils.a 00:09:11.095 CC lib/env_dpdk/pci.o 00:09:11.095 CC lib/vmd/led.o 00:09:11.095 SO libspdk_conf.so.6.0 00:09:11.095 CC lib/env_dpdk/init.o 00:09:11.095 SO libspdk_rdma_utils.so.1.0 00:09:11.095 LIB libspdk_json.a 00:09:11.095 SYMLINK libspdk_conf.so 00:09:11.095 CC lib/env_dpdk/threads.o 00:09:11.095 SO libspdk_json.so.6.0 00:09:11.095 SYMLINK libspdk_rdma_utils.so 00:09:11.095 CC lib/env_dpdk/pci_ioat.o 00:09:11.095 SYMLINK libspdk_json.so 00:09:11.095 CC lib/env_dpdk/pci_virtio.o 00:09:11.095 CC lib/env_dpdk/pci_vmd.o 00:09:11.095 CC lib/rdma_provider/common.o 00:09:11.095 CC lib/jsonrpc/jsonrpc_server.o 00:09:11.354 LIB libspdk_idxd.a 00:09:11.354 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:09:11.354 CC lib/env_dpdk/pci_idxd.o 00:09:11.354 CC lib/jsonrpc/jsonrpc_client.o 00:09:11.354 CC lib/env_dpdk/pci_event.o 00:09:11.354 SO libspdk_idxd.so.12.1 00:09:11.354 LIB libspdk_vmd.a 00:09:11.354 SO libspdk_vmd.so.6.0 00:09:11.354 SYMLINK libspdk_idxd.so 00:09:11.354 CC lib/env_dpdk/sigbus_handler.o 00:09:11.613 SYMLINK libspdk_vmd.so 00:09:11.613 CC lib/env_dpdk/pci_dpdk.o 00:09:11.613 CC lib/rdma_provider/rdma_provider_verbs.o 00:09:11.613 CC lib/env_dpdk/pci_dpdk_2207.o 00:09:11.613 CC 
lib/env_dpdk/pci_dpdk_2211.o 00:09:11.613 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:09:11.871 LIB libspdk_rdma_provider.a 00:09:11.871 SO libspdk_rdma_provider.so.7.0 00:09:11.871 LIB libspdk_jsonrpc.a 00:09:11.871 SYMLINK libspdk_rdma_provider.so 00:09:12.130 SO libspdk_jsonrpc.so.6.0 00:09:12.130 SYMLINK libspdk_jsonrpc.so 00:09:12.388 LIB libspdk_env_dpdk.a 00:09:12.388 CC lib/rpc/rpc.o 00:09:12.388 SO libspdk_env_dpdk.so.15.1 00:09:12.646 LIB libspdk_rpc.a 00:09:12.646 SO libspdk_rpc.so.6.0 00:09:12.646 SYMLINK libspdk_env_dpdk.so 00:09:12.646 SYMLINK libspdk_rpc.so 00:09:12.904 CC lib/trace/trace.o 00:09:12.904 CC lib/trace/trace_flags.o 00:09:12.904 CC lib/trace/trace_rpc.o 00:09:12.904 CC lib/keyring/keyring.o 00:09:12.904 CC lib/keyring/keyring_rpc.o 00:09:12.904 CC lib/notify/notify.o 00:09:12.904 CC lib/notify/notify_rpc.o 00:09:13.168 LIB libspdk_notify.a 00:09:13.429 SO libspdk_notify.so.6.0 00:09:13.429 LIB libspdk_keyring.a 00:09:13.429 SYMLINK libspdk_notify.so 00:09:13.429 LIB libspdk_trace.a 00:09:13.429 SO libspdk_keyring.so.2.0 00:09:13.429 SO libspdk_trace.so.11.0 00:09:13.429 SYMLINK libspdk_keyring.so 00:09:13.693 SYMLINK libspdk_trace.so 00:09:13.953 CC lib/thread/thread.o 00:09:13.953 CC lib/thread/iobuf.o 00:09:13.953 CC lib/sock/sock.o 00:09:13.953 CC lib/sock/sock_rpc.o 00:09:14.518 LIB libspdk_sock.a 00:09:14.518 SO libspdk_sock.so.10.0 00:09:14.777 SYMLINK libspdk_sock.so 00:09:15.033 CC lib/nvme/nvme_ctrlr_cmd.o 00:09:15.033 CC lib/nvme/nvme_ctrlr.o 00:09:15.033 CC lib/nvme/nvme_fabric.o 00:09:15.033 CC lib/nvme/nvme_ns.o 00:09:15.033 CC lib/nvme/nvme_pcie.o 00:09:15.033 CC lib/nvme/nvme_ns_cmd.o 00:09:15.033 CC lib/nvme/nvme_pcie_common.o 00:09:15.033 CC lib/nvme/nvme_qpair.o 00:09:15.033 CC lib/nvme/nvme.o 00:09:15.965 CC lib/nvme/nvme_quirks.o 00:09:15.965 LIB libspdk_thread.a 00:09:16.222 SO libspdk_thread.so.11.0 00:09:16.222 CC lib/nvme/nvme_transport.o 00:09:16.222 SYMLINK libspdk_thread.so 00:09:16.222 CC lib/nvme/nvme_discovery.o 00:09:16.222 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:09:16.480 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:09:16.480 CC lib/accel/accel.o 00:09:16.738 CC lib/blob/blobstore.o 00:09:16.738 CC lib/blob/request.o 00:09:16.738 CC lib/blob/zeroes.o 00:09:16.738 CC lib/init/json_config.o 00:09:17.003 CC lib/blob/blob_bs_dev.o 00:09:17.003 CC lib/nvme/nvme_tcp.o 00:09:17.003 CC lib/nvme/nvme_opal.o 00:09:17.003 CC lib/nvme/nvme_io_msg.o 00:09:17.261 CC lib/init/subsystem.o 00:09:17.261 CC lib/init/subsystem_rpc.o 00:09:17.519 CC lib/init/rpc.o 00:09:17.519 CC lib/virtio/virtio.o 00:09:17.519 CC lib/nvme/nvme_poll_group.o 00:09:17.519 CC lib/nvme/nvme_zns.o 00:09:17.776 LIB libspdk_init.a 00:09:17.776 CC lib/virtio/virtio_vhost_user.o 00:09:17.776 SO libspdk_init.so.6.0 00:09:17.776 SYMLINK libspdk_init.so 00:09:17.776 CC lib/virtio/virtio_vfio_user.o 00:09:17.776 CC lib/virtio/virtio_pci.o 00:09:17.776 CC lib/accel/accel_rpc.o 00:09:18.033 CC lib/accel/accel_sw.o 00:09:18.033 CC lib/nvme/nvme_stubs.o 00:09:18.033 CC lib/nvme/nvme_auth.o 00:09:18.290 LIB libspdk_virtio.a 00:09:18.290 CC lib/fsdev/fsdev.o 00:09:18.290 SO libspdk_virtio.so.7.0 00:09:18.290 SYMLINK libspdk_virtio.so 00:09:18.290 CC lib/nvme/nvme_cuse.o 00:09:18.290 CC lib/event/app.o 00:09:18.290 CC lib/nvme/nvme_rdma.o 00:09:18.290 LIB libspdk_accel.a 00:09:18.290 SO libspdk_accel.so.16.0 00:09:18.547 CC lib/fsdev/fsdev_io.o 00:09:18.547 SYMLINK libspdk_accel.so 00:09:18.547 CC lib/fsdev/fsdev_rpc.o 00:09:18.547 CC lib/event/reactor.o 00:09:18.804 CC lib/event/log_rpc.o 00:09:18.804 
CC lib/event/app_rpc.o 00:09:18.804 LIB libspdk_fsdev.a 00:09:18.804 CC lib/event/scheduler_static.o 00:09:18.804 SO libspdk_fsdev.so.2.0 00:09:19.061 CC lib/bdev/bdev.o 00:09:19.062 SYMLINK libspdk_fsdev.so 00:09:19.062 CC lib/bdev/bdev_rpc.o 00:09:19.062 CC lib/bdev/bdev_zone.o 00:09:19.062 CC lib/bdev/part.o 00:09:19.062 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:09:19.320 LIB libspdk_event.a 00:09:19.320 CC lib/bdev/scsi_nvme.o 00:09:19.320 SO libspdk_event.so.14.0 00:09:19.320 SYMLINK libspdk_event.so 00:09:19.886 LIB libspdk_fuse_dispatcher.a 00:09:19.886 LIB libspdk_nvme.a 00:09:19.886 SO libspdk_fuse_dispatcher.so.1.0 00:09:19.886 SYMLINK libspdk_fuse_dispatcher.so 00:09:20.143 SO libspdk_nvme.so.15.0 00:09:20.401 LIB libspdk_blob.a 00:09:20.401 SO libspdk_blob.so.12.0 00:09:20.401 SYMLINK libspdk_nvme.so 00:09:20.682 SYMLINK libspdk_blob.so 00:09:20.941 CC lib/blobfs/blobfs.o 00:09:20.941 CC lib/blobfs/tree.o 00:09:20.941 CC lib/lvol/lvol.o 00:09:21.875 LIB libspdk_blobfs.a 00:09:21.875 SO libspdk_blobfs.so.11.0 00:09:21.875 SYMLINK libspdk_blobfs.so 00:09:22.133 LIB libspdk_lvol.a 00:09:22.393 SO libspdk_lvol.so.11.0 00:09:22.393 SYMLINK libspdk_lvol.so 00:09:22.661 LIB libspdk_bdev.a 00:09:22.919 SO libspdk_bdev.so.17.0 00:09:22.919 SYMLINK libspdk_bdev.so 00:09:23.177 CC lib/scsi/dev.o 00:09:23.177 CC lib/scsi/lun.o 00:09:23.177 CC lib/ublk/ublk.o 00:09:23.177 CC lib/scsi/port.o 00:09:23.177 CC lib/scsi/scsi.o 00:09:23.177 CC lib/ublk/ublk_rpc.o 00:09:23.177 CC lib/scsi/scsi_bdev.o 00:09:23.177 CC lib/nbd/nbd.o 00:09:23.177 CC lib/nvmf/ctrlr.o 00:09:23.177 CC lib/ftl/ftl_core.o 00:09:23.435 CC lib/ftl/ftl_init.o 00:09:23.435 CC lib/ftl/ftl_layout.o 00:09:23.435 CC lib/ftl/ftl_debug.o 00:09:23.693 CC lib/nvmf/ctrlr_discovery.o 00:09:23.693 CC lib/nbd/nbd_rpc.o 00:09:23.693 CC lib/nvmf/ctrlr_bdev.o 00:09:23.951 CC lib/ftl/ftl_io.o 00:09:23.951 CC lib/ftl/ftl_sb.o 00:09:23.951 CC lib/ftl/ftl_l2p.o 00:09:23.951 CC lib/ftl/ftl_l2p_flat.o 00:09:23.951 CC lib/scsi/scsi_pr.o 00:09:23.951 LIB libspdk_nbd.a 00:09:24.209 SO libspdk_nbd.so.7.0 00:09:24.209 SYMLINK libspdk_nbd.so 00:09:24.209 CC lib/scsi/scsi_rpc.o 00:09:24.209 CC lib/scsi/task.o 00:09:24.209 LIB libspdk_ublk.a 00:09:24.209 CC lib/ftl/ftl_nv_cache.o 00:09:24.209 CC lib/ftl/ftl_band.o 00:09:24.209 SO libspdk_ublk.so.3.0 00:09:24.468 CC lib/ftl/ftl_band_ops.o 00:09:24.468 SYMLINK libspdk_ublk.so 00:09:24.468 CC lib/ftl/ftl_writer.o 00:09:24.468 CC lib/ftl/ftl_rq.o 00:09:24.468 CC lib/ftl/ftl_reloc.o 00:09:24.725 CC lib/nvmf/subsystem.o 00:09:24.725 LIB libspdk_scsi.a 00:09:24.725 SO libspdk_scsi.so.9.0 00:09:24.725 CC lib/ftl/ftl_l2p_cache.o 00:09:24.983 SYMLINK libspdk_scsi.so 00:09:24.983 CC lib/nvmf/nvmf.o 00:09:24.983 CC lib/nvmf/nvmf_rpc.o 00:09:24.983 CC lib/nvmf/transport.o 00:09:25.242 CC lib/ftl/ftl_p2l.o 00:09:25.242 CC lib/iscsi/conn.o 00:09:25.242 CC lib/vhost/vhost.o 00:09:25.808 CC lib/iscsi/init_grp.o 00:09:25.808 CC lib/ftl/ftl_p2l_log.o 00:09:25.808 CC lib/vhost/vhost_rpc.o 00:09:26.066 CC lib/ftl/mngt/ftl_mngt.o 00:09:26.066 CC lib/nvmf/tcp.o 00:09:26.324 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:09:26.324 CC lib/nvmf/stubs.o 00:09:26.324 CC lib/iscsi/iscsi.o 00:09:26.324 CC lib/nvmf/mdns_server.o 00:09:26.324 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:09:26.324 CC lib/nvmf/rdma.o 00:09:26.582 CC lib/ftl/mngt/ftl_mngt_startup.o 00:09:26.840 CC lib/vhost/vhost_scsi.o 00:09:26.840 CC lib/ftl/mngt/ftl_mngt_md.o 00:09:26.840 CC lib/nvmf/auth.o 00:09:27.097 CC lib/ftl/mngt/ftl_mngt_misc.o 00:09:27.098 CC lib/iscsi/param.o 
00:09:27.098 CC lib/vhost/vhost_blk.o 00:09:27.098 CC lib/iscsi/portal_grp.o 00:09:27.355 CC lib/iscsi/tgt_node.o 00:09:27.613 CC lib/iscsi/iscsi_subsystem.o 00:09:27.613 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:09:27.613 CC lib/iscsi/iscsi_rpc.o 00:09:27.872 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:09:27.872 CC lib/vhost/rte_vhost_user.o 00:09:27.872 CC lib/ftl/mngt/ftl_mngt_band.o 00:09:28.129 CC lib/iscsi/task.o 00:09:28.129 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:09:28.129 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:09:28.129 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:09:28.386 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:09:28.386 CC lib/ftl/utils/ftl_conf.o 00:09:28.386 CC lib/ftl/utils/ftl_md.o 00:09:28.386 CC lib/ftl/utils/ftl_mempool.o 00:09:28.386 CC lib/ftl/utils/ftl_bitmap.o 00:09:28.386 CC lib/ftl/utils/ftl_property.o 00:09:28.386 LIB libspdk_iscsi.a 00:09:28.386 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:09:28.653 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:09:28.653 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:09:28.653 SO libspdk_iscsi.so.8.0 00:09:28.653 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:09:28.653 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:09:28.940 SYMLINK libspdk_iscsi.so 00:09:28.940 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:09:28.940 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:09:28.940 CC lib/ftl/upgrade/ftl_sb_v3.o 00:09:28.940 CC lib/ftl/upgrade/ftl_sb_v5.o 00:09:28.940 CC lib/ftl/nvc/ftl_nvc_dev.o 00:09:28.940 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:09:28.940 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:09:28.940 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:09:28.940 CC lib/ftl/base/ftl_base_dev.o 00:09:28.940 CC lib/ftl/base/ftl_base_bdev.o 00:09:28.940 LIB libspdk_nvmf.a 00:09:28.940 CC lib/ftl/ftl_trace.o 00:09:29.197 SO libspdk_nvmf.so.20.0 00:09:29.197 LIB libspdk_vhost.a 00:09:29.454 LIB libspdk_ftl.a 00:09:29.454 SYMLINK libspdk_nvmf.so 00:09:29.454 SO libspdk_vhost.so.8.0 00:09:29.454 SYMLINK libspdk_vhost.so 00:09:29.711 SO libspdk_ftl.so.9.0 00:09:29.968 SYMLINK libspdk_ftl.so 00:09:30.226 CC module/env_dpdk/env_dpdk_rpc.o 00:09:30.484 CC module/sock/posix/posix.o 00:09:30.484 CC module/accel/iaa/accel_iaa.o 00:09:30.484 CC module/scheduler/dynamic/scheduler_dynamic.o 00:09:30.484 CC module/accel/ioat/accel_ioat.o 00:09:30.484 CC module/accel/error/accel_error.o 00:09:30.484 CC module/fsdev/aio/fsdev_aio.o 00:09:30.484 CC module/keyring/file/keyring.o 00:09:30.484 CC module/blob/bdev/blob_bdev.o 00:09:30.484 CC module/accel/dsa/accel_dsa.o 00:09:30.484 LIB libspdk_env_dpdk_rpc.a 00:09:30.484 SO libspdk_env_dpdk_rpc.so.6.0 00:09:30.742 SYMLINK libspdk_env_dpdk_rpc.so 00:09:30.742 CC module/accel/dsa/accel_dsa_rpc.o 00:09:30.742 CC module/keyring/file/keyring_rpc.o 00:09:30.742 LIB libspdk_scheduler_dynamic.a 00:09:30.742 SO libspdk_scheduler_dynamic.so.4.0 00:09:30.742 CC module/accel/ioat/accel_ioat_rpc.o 00:09:30.742 CC module/accel/error/accel_error_rpc.o 00:09:30.742 SYMLINK libspdk_scheduler_dynamic.so 00:09:30.742 CC module/accel/iaa/accel_iaa_rpc.o 00:09:31.000 LIB libspdk_blob_bdev.a 00:09:31.000 LIB libspdk_keyring_file.a 00:09:31.000 SO libspdk_blob_bdev.so.12.0 00:09:31.000 LIB libspdk_accel_dsa.a 00:09:31.000 SO libspdk_keyring_file.so.2.0 00:09:31.000 LIB libspdk_accel_ioat.a 00:09:31.000 SO libspdk_accel_dsa.so.5.0 00:09:31.000 LIB libspdk_accel_error.a 00:09:31.000 SO libspdk_accel_ioat.so.6.0 00:09:31.000 LIB libspdk_accel_iaa.a 00:09:31.000 SYMLINK libspdk_blob_bdev.so 00:09:31.000 SO libspdk_accel_error.so.2.0 00:09:31.258 SYMLINK libspdk_keyring_file.so 00:09:31.258 CC 
module/keyring/linux/keyring.o 00:09:31.258 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:09:31.258 SYMLINK libspdk_accel_ioat.so 00:09:31.258 CC module/fsdev/aio/fsdev_aio_rpc.o 00:09:31.258 SO libspdk_accel_iaa.so.3.0 00:09:31.258 SYMLINK libspdk_accel_dsa.so 00:09:31.258 SYMLINK libspdk_accel_error.so 00:09:31.258 CC module/keyring/linux/keyring_rpc.o 00:09:31.258 SYMLINK libspdk_accel_iaa.so 00:09:31.258 CC module/fsdev/aio/linux_aio_mgr.o 00:09:31.516 CC module/scheduler/gscheduler/gscheduler.o 00:09:31.516 LIB libspdk_keyring_linux.a 00:09:31.516 LIB libspdk_scheduler_dpdk_governor.a 00:09:31.516 SO libspdk_keyring_linux.so.1.0 00:09:31.516 SO libspdk_scheduler_dpdk_governor.so.4.0 00:09:31.516 LIB libspdk_sock_posix.a 00:09:31.516 SYMLINK libspdk_keyring_linux.so 00:09:31.516 SYMLINK libspdk_scheduler_dpdk_governor.so 00:09:31.516 SO libspdk_sock_posix.so.6.0 00:09:31.516 CC module/bdev/delay/vbdev_delay.o 00:09:31.516 CC module/bdev/delay/vbdev_delay_rpc.o 00:09:31.516 CC module/blobfs/bdev/blobfs_bdev.o 00:09:31.516 LIB libspdk_fsdev_aio.a 00:09:31.516 SO libspdk_fsdev_aio.so.1.0 00:09:31.774 CC module/bdev/error/vbdev_error.o 00:09:31.774 SYMLINK libspdk_sock_posix.so 00:09:31.774 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:09:31.774 LIB libspdk_scheduler_gscheduler.a 00:09:31.774 SYMLINK libspdk_fsdev_aio.so 00:09:31.774 CC module/bdev/error/vbdev_error_rpc.o 00:09:31.774 CC module/bdev/gpt/gpt.o 00:09:31.774 SO libspdk_scheduler_gscheduler.so.4.0 00:09:31.774 CC module/bdev/malloc/bdev_malloc.o 00:09:31.774 CC module/bdev/lvol/vbdev_lvol.o 00:09:31.774 SYMLINK libspdk_scheduler_gscheduler.so 00:09:31.774 CC module/bdev/gpt/vbdev_gpt.o 00:09:32.033 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:09:32.033 LIB libspdk_blobfs_bdev.a 00:09:32.033 SO libspdk_blobfs_bdev.so.6.0 00:09:32.033 CC module/bdev/malloc/bdev_malloc_rpc.o 00:09:32.033 SYMLINK libspdk_blobfs_bdev.so 00:09:32.291 CC module/bdev/null/bdev_null.o 00:09:32.291 LIB libspdk_bdev_delay.a 00:09:32.291 LIB libspdk_bdev_error.a 00:09:32.291 SO libspdk_bdev_delay.so.6.0 00:09:32.291 SO libspdk_bdev_error.so.6.0 00:09:32.291 SYMLINK libspdk_bdev_delay.so 00:09:32.291 LIB libspdk_bdev_gpt.a 00:09:32.291 CC module/bdev/null/bdev_null_rpc.o 00:09:32.291 CC module/bdev/nvme/bdev_nvme.o 00:09:32.291 SYMLINK libspdk_bdev_error.so 00:09:32.291 SO libspdk_bdev_gpt.so.6.0 00:09:32.291 CC module/bdev/passthru/vbdev_passthru.o 00:09:32.549 LIB libspdk_bdev_malloc.a 00:09:32.549 SYMLINK libspdk_bdev_gpt.so 00:09:32.549 SO libspdk_bdev_malloc.so.6.0 00:09:32.549 CC module/bdev/raid/bdev_raid.o 00:09:32.549 CC module/bdev/nvme/bdev_nvme_rpc.o 00:09:32.549 SYMLINK libspdk_bdev_malloc.so 00:09:32.549 CC module/bdev/nvme/nvme_rpc.o 00:09:32.808 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:09:32.808 CC module/bdev/split/vbdev_split.o 00:09:32.808 LIB libspdk_bdev_null.a 00:09:32.808 LIB libspdk_bdev_lvol.a 00:09:32.808 SO libspdk_bdev_null.so.6.0 00:09:32.808 SO libspdk_bdev_lvol.so.6.0 00:09:32.808 CC module/bdev/zone_block/vbdev_zone_block.o 00:09:32.808 SYMLINK libspdk_bdev_lvol.so 00:09:32.808 CC module/bdev/nvme/bdev_mdns_client.o 00:09:32.808 CC module/bdev/nvme/vbdev_opal.o 00:09:32.808 SYMLINK libspdk_bdev_null.so 00:09:32.809 CC module/bdev/nvme/vbdev_opal_rpc.o 00:09:33.067 LIB libspdk_bdev_passthru.a 00:09:33.067 SO libspdk_bdev_passthru.so.6.0 00:09:33.067 CC module/bdev/raid/bdev_raid_rpc.o 00:09:33.067 CC module/bdev/split/vbdev_split_rpc.o 00:09:33.067 SYMLINK libspdk_bdev_passthru.so 00:09:33.067 CC 
module/bdev/raid/bdev_raid_sb.o 00:09:33.326 CC module/bdev/raid/raid0.o 00:09:33.326 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:09:33.326 CC module/bdev/raid/raid1.o 00:09:33.326 CC module/bdev/raid/concat.o 00:09:33.584 LIB libspdk_bdev_split.a 00:09:33.584 SO libspdk_bdev_split.so.6.0 00:09:33.584 LIB libspdk_bdev_zone_block.a 00:09:33.584 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:09:33.584 SYMLINK libspdk_bdev_split.so 00:09:33.584 SO libspdk_bdev_zone_block.so.6.0 00:09:33.842 SYMLINK libspdk_bdev_zone_block.so 00:09:33.842 CC module/bdev/aio/bdev_aio.o 00:09:33.842 CC module/bdev/aio/bdev_aio_rpc.o 00:09:33.842 CC module/bdev/ftl/bdev_ftl.o 00:09:33.842 CC module/bdev/ftl/bdev_ftl_rpc.o 00:09:34.099 CC module/bdev/virtio/bdev_virtio_scsi.o 00:09:34.099 CC module/bdev/virtio/bdev_virtio_rpc.o 00:09:34.099 CC module/bdev/virtio/bdev_virtio_blk.o 00:09:34.099 CC module/bdev/iscsi/bdev_iscsi.o 00:09:34.099 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:09:34.357 LIB libspdk_bdev_aio.a 00:09:34.357 SO libspdk_bdev_aio.so.6.0 00:09:34.357 SYMLINK libspdk_bdev_aio.so 00:09:34.357 LIB libspdk_bdev_ftl.a 00:09:34.357 LIB libspdk_bdev_raid.a 00:09:34.357 SO libspdk_bdev_ftl.so.6.0 00:09:34.659 SO libspdk_bdev_raid.so.6.0 00:09:34.659 LIB libspdk_bdev_iscsi.a 00:09:34.659 SYMLINK libspdk_bdev_ftl.so 00:09:34.659 SO libspdk_bdev_iscsi.so.6.0 00:09:34.659 SYMLINK libspdk_bdev_raid.so 00:09:34.659 SYMLINK libspdk_bdev_iscsi.so 00:09:34.659 LIB libspdk_bdev_virtio.a 00:09:34.917 SO libspdk_bdev_virtio.so.6.0 00:09:34.917 SYMLINK libspdk_bdev_virtio.so 00:09:35.852 LIB libspdk_bdev_nvme.a 00:09:36.111 SO libspdk_bdev_nvme.so.7.1 00:09:36.111 SYMLINK libspdk_bdev_nvme.so 00:09:36.679 CC module/event/subsystems/keyring/keyring.o 00:09:36.679 CC module/event/subsystems/sock/sock.o 00:09:36.679 CC module/event/subsystems/fsdev/fsdev.o 00:09:36.679 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:09:36.679 CC module/event/subsystems/vmd/vmd_rpc.o 00:09:36.679 CC module/event/subsystems/vmd/vmd.o 00:09:36.679 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:09:36.679 CC module/event/subsystems/iobuf/iobuf.o 00:09:36.679 CC module/event/subsystems/scheduler/scheduler.o 00:09:36.679 LIB libspdk_event_keyring.a 00:09:36.679 LIB libspdk_event_sock.a 00:09:36.937 SO libspdk_event_keyring.so.1.0 00:09:36.937 LIB libspdk_event_scheduler.a 00:09:36.937 SO libspdk_event_sock.so.5.0 00:09:36.937 LIB libspdk_event_fsdev.a 00:09:36.937 LIB libspdk_event_vhost_blk.a 00:09:36.937 SO libspdk_event_scheduler.so.4.0 00:09:36.937 LIB libspdk_event_vmd.a 00:09:36.937 SO libspdk_event_fsdev.so.1.0 00:09:36.937 SO libspdk_event_vhost_blk.so.3.0 00:09:36.937 SYMLINK libspdk_event_keyring.so 00:09:36.937 SO libspdk_event_vmd.so.6.0 00:09:36.937 SYMLINK libspdk_event_sock.so 00:09:36.937 LIB libspdk_event_iobuf.a 00:09:36.937 SYMLINK libspdk_event_scheduler.so 00:09:36.937 SYMLINK libspdk_event_fsdev.so 00:09:36.937 SO libspdk_event_iobuf.so.3.0 00:09:36.937 SYMLINK libspdk_event_vhost_blk.so 00:09:36.937 SYMLINK libspdk_event_vmd.so 00:09:36.937 SYMLINK libspdk_event_iobuf.so 00:09:37.195 CC module/event/subsystems/accel/accel.o 00:09:37.453 LIB libspdk_event_accel.a 00:09:37.454 SO libspdk_event_accel.so.6.0 00:09:37.454 SYMLINK libspdk_event_accel.so 00:09:37.713 CC module/event/subsystems/bdev/bdev.o 00:09:37.971 LIB libspdk_event_bdev.a 00:09:37.971 SO libspdk_event_bdev.so.6.0 00:09:37.971 SYMLINK libspdk_event_bdev.so 00:09:38.229 CC module/event/subsystems/ublk/ublk.o 00:09:38.229 CC 
module/event/subsystems/scsi/scsi.o 00:09:38.229 CC module/event/subsystems/nbd/nbd.o 00:09:38.229 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:09:38.229 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:09:38.487 LIB libspdk_event_ublk.a 00:09:38.487 LIB libspdk_event_nbd.a 00:09:38.487 SO libspdk_event_ublk.so.3.0 00:09:38.487 LIB libspdk_event_scsi.a 00:09:38.487 SO libspdk_event_nbd.so.6.0 00:09:38.487 SO libspdk_event_scsi.so.6.0 00:09:38.487 SYMLINK libspdk_event_ublk.so 00:09:38.487 LIB libspdk_event_nvmf.a 00:09:38.487 SYMLINK libspdk_event_nbd.so 00:09:38.487 SYMLINK libspdk_event_scsi.so 00:09:38.487 SO libspdk_event_nvmf.so.6.0 00:09:38.746 SYMLINK libspdk_event_nvmf.so 00:09:38.746 CC module/event/subsystems/iscsi/iscsi.o 00:09:38.746 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:09:39.004 LIB libspdk_event_iscsi.a 00:09:39.004 LIB libspdk_event_vhost_scsi.a 00:09:39.004 SO libspdk_event_vhost_scsi.so.3.0 00:09:39.004 SO libspdk_event_iscsi.so.6.0 00:09:39.262 SYMLINK libspdk_event_vhost_scsi.so 00:09:39.262 SYMLINK libspdk_event_iscsi.so 00:09:39.262 SO libspdk.so.6.0 00:09:39.262 SYMLINK libspdk.so 00:09:39.520 CC app/spdk_nvme_perf/perf.o 00:09:39.520 CC app/trace_record/trace_record.o 00:09:39.520 CC app/spdk_lspci/spdk_lspci.o 00:09:39.520 CXX app/trace/trace.o 00:09:39.777 CC app/nvmf_tgt/nvmf_main.o 00:09:39.777 CC app/iscsi_tgt/iscsi_tgt.o 00:09:39.777 CC examples/util/zipf/zipf.o 00:09:39.777 CC test/thread/poller_perf/poller_perf.o 00:09:39.777 CC app/spdk_tgt/spdk_tgt.o 00:09:39.777 LINK spdk_lspci 00:09:39.777 CC test/dma/test_dma/test_dma.o 00:09:40.036 LINK zipf 00:09:40.036 LINK nvmf_tgt 00:09:40.036 LINK poller_perf 00:09:40.036 LINK spdk_trace_record 00:09:40.036 LINK iscsi_tgt 00:09:40.294 LINK spdk_tgt 00:09:40.294 LINK spdk_trace 00:09:40.294 CC examples/ioat/perf/perf.o 00:09:40.565 CC app/spdk_nvme_identify/identify.o 00:09:40.565 CC examples/ioat/verify/verify.o 00:09:40.565 CC app/spdk_nvme_discover/discovery_aer.o 00:09:40.565 CC test/app/bdev_svc/bdev_svc.o 00:09:40.565 CC app/spdk_top/spdk_top.o 00:09:40.831 LINK ioat_perf 00:09:40.831 LINK test_dma 00:09:40.831 TEST_HEADER include/spdk/accel.h 00:09:40.831 TEST_HEADER include/spdk/accel_module.h 00:09:40.831 TEST_HEADER include/spdk/assert.h 00:09:40.831 TEST_HEADER include/spdk/barrier.h 00:09:40.831 TEST_HEADER include/spdk/base64.h 00:09:40.831 TEST_HEADER include/spdk/bdev.h 00:09:40.831 TEST_HEADER include/spdk/bdev_module.h 00:09:40.831 TEST_HEADER include/spdk/bdev_zone.h 00:09:40.831 TEST_HEADER include/spdk/bit_array.h 00:09:40.831 TEST_HEADER include/spdk/bit_pool.h 00:09:40.831 TEST_HEADER include/spdk/blob_bdev.h 00:09:40.831 TEST_HEADER include/spdk/blobfs_bdev.h 00:09:40.831 TEST_HEADER include/spdk/blobfs.h 00:09:40.831 CC app/spdk_dd/spdk_dd.o 00:09:40.831 TEST_HEADER include/spdk/blob.h 00:09:40.831 TEST_HEADER include/spdk/conf.h 00:09:40.831 TEST_HEADER include/spdk/config.h 00:09:40.831 TEST_HEADER include/spdk/cpuset.h 00:09:40.831 TEST_HEADER include/spdk/crc16.h 00:09:40.831 TEST_HEADER include/spdk/crc32.h 00:09:40.831 TEST_HEADER include/spdk/crc64.h 00:09:40.831 TEST_HEADER include/spdk/dif.h 00:09:40.831 TEST_HEADER include/spdk/dma.h 00:09:40.831 TEST_HEADER include/spdk/endian.h 00:09:40.831 TEST_HEADER include/spdk/env_dpdk.h 00:09:40.831 TEST_HEADER include/spdk/env.h 00:09:40.831 TEST_HEADER include/spdk/event.h 00:09:40.831 TEST_HEADER include/spdk/fd_group.h 00:09:40.831 TEST_HEADER include/spdk/fd.h 00:09:40.831 TEST_HEADER include/spdk/file.h 00:09:40.831 
TEST_HEADER include/spdk/fsdev.h 00:09:40.831 TEST_HEADER include/spdk/fsdev_module.h 00:09:40.831 TEST_HEADER include/spdk/ftl.h 00:09:40.831 TEST_HEADER include/spdk/fuse_dispatcher.h 00:09:40.831 TEST_HEADER include/spdk/gpt_spec.h 00:09:40.831 TEST_HEADER include/spdk/hexlify.h 00:09:40.831 LINK verify 00:09:40.831 TEST_HEADER include/spdk/histogram_data.h 00:09:40.831 TEST_HEADER include/spdk/idxd.h 00:09:40.831 TEST_HEADER include/spdk/idxd_spec.h 00:09:40.831 TEST_HEADER include/spdk/init.h 00:09:40.831 TEST_HEADER include/spdk/ioat.h 00:09:40.831 TEST_HEADER include/spdk/ioat_spec.h 00:09:40.831 TEST_HEADER include/spdk/iscsi_spec.h 00:09:40.831 TEST_HEADER include/spdk/json.h 00:09:40.831 TEST_HEADER include/spdk/jsonrpc.h 00:09:40.831 TEST_HEADER include/spdk/keyring.h 00:09:40.831 TEST_HEADER include/spdk/keyring_module.h 00:09:40.831 TEST_HEADER include/spdk/likely.h 00:09:40.831 TEST_HEADER include/spdk/log.h 00:09:40.831 TEST_HEADER include/spdk/lvol.h 00:09:40.831 TEST_HEADER include/spdk/md5.h 00:09:40.831 TEST_HEADER include/spdk/memory.h 00:09:40.831 TEST_HEADER include/spdk/mmio.h 00:09:40.831 LINK spdk_nvme_discover 00:09:40.831 TEST_HEADER include/spdk/nbd.h 00:09:40.831 TEST_HEADER include/spdk/net.h 00:09:40.831 TEST_HEADER include/spdk/notify.h 00:09:40.831 TEST_HEADER include/spdk/nvme.h 00:09:40.831 TEST_HEADER include/spdk/nvme_intel.h 00:09:40.831 TEST_HEADER include/spdk/nvme_ocssd.h 00:09:40.831 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:09:40.831 LINK spdk_nvme_perf 00:09:40.831 TEST_HEADER include/spdk/nvme_spec.h 00:09:40.831 TEST_HEADER include/spdk/nvme_zns.h 00:09:40.831 TEST_HEADER include/spdk/nvmf_cmd.h 00:09:40.831 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:09:40.831 TEST_HEADER include/spdk/nvmf.h 00:09:40.831 TEST_HEADER include/spdk/nvmf_spec.h 00:09:40.831 LINK bdev_svc 00:09:41.088 TEST_HEADER include/spdk/nvmf_transport.h 00:09:41.088 TEST_HEADER include/spdk/opal.h 00:09:41.088 TEST_HEADER include/spdk/opal_spec.h 00:09:41.088 TEST_HEADER include/spdk/pci_ids.h 00:09:41.088 TEST_HEADER include/spdk/pipe.h 00:09:41.088 TEST_HEADER include/spdk/queue.h 00:09:41.088 TEST_HEADER include/spdk/reduce.h 00:09:41.088 TEST_HEADER include/spdk/rpc.h 00:09:41.088 TEST_HEADER include/spdk/scheduler.h 00:09:41.088 TEST_HEADER include/spdk/scsi.h 00:09:41.088 TEST_HEADER include/spdk/scsi_spec.h 00:09:41.088 TEST_HEADER include/spdk/sock.h 00:09:41.088 TEST_HEADER include/spdk/stdinc.h 00:09:41.088 TEST_HEADER include/spdk/string.h 00:09:41.088 TEST_HEADER include/spdk/thread.h 00:09:41.088 TEST_HEADER include/spdk/trace.h 00:09:41.088 TEST_HEADER include/spdk/trace_parser.h 00:09:41.088 TEST_HEADER include/spdk/tree.h 00:09:41.088 TEST_HEADER include/spdk/ublk.h 00:09:41.088 TEST_HEADER include/spdk/util.h 00:09:41.088 TEST_HEADER include/spdk/uuid.h 00:09:41.088 TEST_HEADER include/spdk/version.h 00:09:41.088 TEST_HEADER include/spdk/vfio_user_pci.h 00:09:41.088 TEST_HEADER include/spdk/vfio_user_spec.h 00:09:41.088 TEST_HEADER include/spdk/vhost.h 00:09:41.088 TEST_HEADER include/spdk/vmd.h 00:09:41.088 TEST_HEADER include/spdk/xor.h 00:09:41.088 TEST_HEADER include/spdk/zipf.h 00:09:41.088 CXX test/cpp_headers/accel.o 00:09:41.345 CC test/env/vtophys/vtophys.o 00:09:41.345 CC test/env/mem_callbacks/mem_callbacks.o 00:09:41.345 LINK spdk_dd 00:09:41.345 CXX test/cpp_headers/accel_module.o 00:09:41.345 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:09:41.603 LINK vtophys 00:09:41.603 CC app/fio/nvme/fio_plugin.o 00:09:41.860 CXX 
test/cpp_headers/assert.o 00:09:41.860 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:09:41.860 LINK env_dpdk_post_init 00:09:41.860 CC test/app/histogram_perf/histogram_perf.o 00:09:41.860 LINK spdk_nvme_identify 00:09:42.118 CC test/app/jsoncat/jsoncat.o 00:09:42.118 CXX test/cpp_headers/barrier.o 00:09:42.118 LINK histogram_perf 00:09:42.118 CC test/app/stub/stub.o 00:09:42.376 LINK jsoncat 00:09:42.376 LINK spdk_top 00:09:42.376 CXX test/cpp_headers/base64.o 00:09:42.376 LINK mem_callbacks 00:09:42.376 CC app/vhost/vhost.o 00:09:42.635 LINK stub 00:09:42.635 LINK spdk_nvme 00:09:42.635 CC app/fio/bdev/fio_plugin.o 00:09:42.635 LINK nvme_fuzz 00:09:42.635 CXX test/cpp_headers/bdev.o 00:09:42.635 CC test/env/memory/memory_ut.o 00:09:42.893 LINK vhost 00:09:42.893 CC test/event/event_perf/event_perf.o 00:09:42.893 CC examples/interrupt_tgt/interrupt_tgt.o 00:09:42.893 CC test/event/reactor/reactor.o 00:09:42.893 CC test/event/reactor_perf/reactor_perf.o 00:09:43.151 CXX test/cpp_headers/bdev_module.o 00:09:43.151 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:09:43.151 LINK event_perf 00:09:43.151 LINK reactor 00:09:43.151 LINK interrupt_tgt 00:09:43.410 LINK reactor_perf 00:09:43.410 CC test/event/app_repeat/app_repeat.o 00:09:43.671 LINK spdk_bdev 00:09:43.671 CXX test/cpp_headers/bdev_zone.o 00:09:43.671 CXX test/cpp_headers/bit_array.o 00:09:43.929 LINK app_repeat 00:09:43.929 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:09:43.929 CC examples/sock/hello_world/hello_sock.o 00:09:43.929 CC test/event/scheduler/scheduler.o 00:09:43.929 CC examples/thread/thread/thread_ex.o 00:09:44.187 CXX test/cpp_headers/bit_pool.o 00:09:44.187 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:09:44.187 LINK hello_sock 00:09:44.446 CC examples/vmd/lsvmd/lsvmd.o 00:09:44.446 CC examples/idxd/perf/perf.o 00:09:44.446 LINK thread 00:09:44.446 LINK scheduler 00:09:44.446 CXX test/cpp_headers/blob_bdev.o 00:09:44.705 LINK lsvmd 00:09:44.705 CC examples/vmd/led/led.o 00:09:44.962 CXX test/cpp_headers/blobfs_bdev.o 00:09:44.962 LINK vhost_fuzz 00:09:44.962 CC test/env/pci/pci_ut.o 00:09:44.962 LINK led 00:09:44.962 LINK idxd_perf 00:09:45.221 LINK memory_ut 00:09:45.221 CC examples/nvme/hello_world/hello_world.o 00:09:45.221 CXX test/cpp_headers/blobfs.o 00:09:45.221 CC examples/fsdev/hello_world/hello_fsdev.o 00:09:45.479 CC examples/nvme/reconnect/reconnect.o 00:09:45.479 CC examples/nvme/nvme_manage/nvme_manage.o 00:09:45.479 LINK hello_world 00:09:45.738 CC examples/accel/perf/accel_perf.o 00:09:45.738 CXX test/cpp_headers/blob.o 00:09:45.738 CC examples/nvme/arbitration/arbitration.o 00:09:45.738 LINK pci_ut 00:09:45.738 LINK hello_fsdev 00:09:46.001 CXX test/cpp_headers/conf.o 00:09:46.001 CC examples/nvme/hotplug/hotplug.o 00:09:46.303 CXX test/cpp_headers/config.o 00:09:46.303 CXX test/cpp_headers/cpuset.o 00:09:46.303 LINK reconnect 00:09:46.303 CXX test/cpp_headers/crc16.o 00:09:46.583 LINK arbitration 00:09:46.583 CC test/nvme/aer/aer.o 00:09:46.583 LINK nvme_manage 00:09:46.583 LINK iscsi_fuzz 00:09:46.583 LINK hotplug 00:09:46.842 CXX test/cpp_headers/crc32.o 00:09:46.842 LINK accel_perf 00:09:46.842 CC test/rpc_client/rpc_client_test.o 00:09:46.842 CC examples/nvme/cmb_copy/cmb_copy.o 00:09:47.101 CXX test/cpp_headers/crc64.o 00:09:47.101 CC examples/nvme/abort/abort.o 00:09:47.101 LINK aer 00:09:47.101 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:09:47.101 LINK cmb_copy 00:09:47.359 LINK rpc_client_test 00:09:47.359 CC test/nvme/reset/reset.o 00:09:47.359 CC examples/blob/cli/blobcli.o 
00:09:47.359 CC examples/blob/hello_world/hello_blob.o 00:09:47.359 CXX test/cpp_headers/dif.o 00:09:47.359 LINK pmr_persistence 00:09:47.617 CXX test/cpp_headers/dma.o 00:09:47.617 LINK abort 00:09:47.617 CC examples/bdev/hello_world/hello_bdev.o 00:09:47.876 CXX test/cpp_headers/endian.o 00:09:47.876 LINK reset 00:09:47.876 LINK hello_blob 00:09:47.876 CC examples/bdev/bdevperf/bdevperf.o 00:09:48.135 CC test/accel/dif/dif.o 00:09:48.393 CXX test/cpp_headers/env_dpdk.o 00:09:48.393 CXX test/cpp_headers/env.o 00:09:48.650 CC test/nvme/sgl/sgl.o 00:09:48.650 LINK hello_bdev 00:09:48.650 LINK blobcli 00:09:48.650 CC test/nvme/e2edp/nvme_dp.o 00:09:48.907 CC test/blobfs/mkfs/mkfs.o 00:09:48.907 CXX test/cpp_headers/event.o 00:09:49.169 CC test/nvme/overhead/overhead.o 00:09:49.169 CXX test/cpp_headers/fd_group.o 00:09:49.427 LINK sgl 00:09:49.427 CXX test/cpp_headers/fd.o 00:09:49.427 CC test/nvme/err_injection/err_injection.o 00:09:49.427 LINK mkfs 00:09:49.427 LINK nvme_dp 00:09:49.685 CC test/lvol/esnap/esnap.o 00:09:49.685 LINK overhead 00:09:49.685 CXX test/cpp_headers/file.o 00:09:49.944 LINK err_injection 00:09:49.944 LINK dif 00:09:49.944 CC test/nvme/startup/startup.o 00:09:49.944 CC test/nvme/reserve/reserve.o 00:09:49.944 LINK bdevperf 00:09:50.202 CC test/nvme/simple_copy/simple_copy.o 00:09:50.202 CXX test/cpp_headers/fsdev.o 00:09:50.202 CC test/nvme/connect_stress/connect_stress.o 00:09:50.202 LINK startup 00:09:50.460 CC test/nvme/boot_partition/boot_partition.o 00:09:50.460 LINK reserve 00:09:50.460 CXX test/cpp_headers/fsdev_module.o 00:09:50.460 CC test/nvme/compliance/nvme_compliance.o 00:09:50.719 LINK simple_copy 00:09:50.719 LINK connect_stress 00:09:50.719 CXX test/cpp_headers/ftl.o 00:09:50.720 LINK boot_partition 00:09:50.978 LINK nvme_compliance 00:09:50.978 CXX test/cpp_headers/fuse_dispatcher.o 00:09:50.978 CC examples/nvmf/nvmf/nvmf.o 00:09:50.978 CC test/nvme/fused_ordering/fused_ordering.o 00:09:50.978 CC test/nvme/doorbell_aers/doorbell_aers.o 00:09:51.237 CXX test/cpp_headers/gpt_spec.o 00:09:51.237 CXX test/cpp_headers/hexlify.o 00:09:51.237 CC test/nvme/fdp/fdp.o 00:09:51.494 CC test/bdev/bdevio/bdevio.o 00:09:51.494 LINK doorbell_aers 00:09:51.494 CC test/nvme/cuse/cuse.o 00:09:51.494 LINK fused_ordering 00:09:51.494 CXX test/cpp_headers/histogram_data.o 00:09:51.494 LINK nvmf 00:09:51.751 CXX test/cpp_headers/idxd.o 00:09:51.751 CXX test/cpp_headers/idxd_spec.o 00:09:51.751 CXX test/cpp_headers/init.o 00:09:51.751 CXX test/cpp_headers/ioat.o 00:09:52.009 CXX test/cpp_headers/ioat_spec.o 00:09:52.009 LINK fdp 00:09:52.009 CXX test/cpp_headers/iscsi_spec.o 00:09:52.009 CXX test/cpp_headers/json.o 00:09:52.009 CXX test/cpp_headers/jsonrpc.o 00:09:52.267 CXX test/cpp_headers/keyring.o 00:09:52.267 LINK bdevio 00:09:52.267 CXX test/cpp_headers/keyring_module.o 00:09:52.267 CXX test/cpp_headers/likely.o 00:09:52.267 CXX test/cpp_headers/log.o 00:09:52.589 CXX test/cpp_headers/lvol.o 00:09:52.589 CXX test/cpp_headers/md5.o 00:09:52.589 CXX test/cpp_headers/memory.o 00:09:52.589 CXX test/cpp_headers/mmio.o 00:09:52.589 CXX test/cpp_headers/nbd.o 00:09:52.589 CXX test/cpp_headers/net.o 00:09:52.851 CXX test/cpp_headers/notify.o 00:09:52.851 CXX test/cpp_headers/nvme.o 00:09:52.851 CXX test/cpp_headers/nvme_intel.o 00:09:52.851 CXX test/cpp_headers/nvme_ocssd.o 00:09:52.851 CXX test/cpp_headers/nvme_ocssd_spec.o 00:09:52.851 CXX test/cpp_headers/nvme_spec.o 00:09:53.109 CXX test/cpp_headers/nvme_zns.o 00:09:53.109 CXX test/cpp_headers/nvmf_cmd.o 00:09:53.109 CXX 
test/cpp_headers/nvmf_fc_spec.o 00:09:53.367 CXX test/cpp_headers/nvmf.o 00:09:53.367 CXX test/cpp_headers/nvmf_spec.o 00:09:53.367 CXX test/cpp_headers/nvmf_transport.o 00:09:53.367 CXX test/cpp_headers/opal.o 00:09:53.367 CXX test/cpp_headers/opal_spec.o 00:09:53.625 CXX test/cpp_headers/pci_ids.o 00:09:53.625 CXX test/cpp_headers/pipe.o 00:09:53.625 CXX test/cpp_headers/queue.o 00:09:53.625 CXX test/cpp_headers/reduce.o 00:09:53.625 CXX test/cpp_headers/rpc.o 00:09:53.625 CXX test/cpp_headers/scheduler.o 00:09:53.625 CXX test/cpp_headers/scsi.o 00:09:53.882 CXX test/cpp_headers/scsi_spec.o 00:09:53.882 CXX test/cpp_headers/sock.o 00:09:53.882 CXX test/cpp_headers/stdinc.o 00:09:53.882 CXX test/cpp_headers/string.o 00:09:53.882 CXX test/cpp_headers/thread.o 00:09:53.882 CXX test/cpp_headers/trace.o 00:09:54.139 LINK cuse 00:09:54.139 CXX test/cpp_headers/trace_parser.o 00:09:54.139 CXX test/cpp_headers/tree.o 00:09:54.139 CXX test/cpp_headers/ublk.o 00:09:54.139 CXX test/cpp_headers/util.o 00:09:54.139 CXX test/cpp_headers/uuid.o 00:09:54.139 CXX test/cpp_headers/version.o 00:09:54.139 CXX test/cpp_headers/vfio_user_pci.o 00:09:54.139 CXX test/cpp_headers/vfio_user_spec.o 00:09:54.139 CXX test/cpp_headers/vhost.o 00:09:54.396 CXX test/cpp_headers/vmd.o 00:09:54.396 CXX test/cpp_headers/xor.o 00:09:54.396 CXX test/cpp_headers/zipf.o 00:09:57.676 LINK esnap 00:09:58.244 00:09:58.244 real 2m21.481s 00:09:58.244 user 13m49.949s 00:09:58.244 sys 2m22.101s 00:09:58.244 13:04:04 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:09:58.244 13:04:04 make -- common/autotest_common.sh@10 -- $ set +x 00:09:58.244 ************************************ 00:09:58.244 END TEST make 00:09:58.244 ************************************ 00:09:58.244 13:04:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:09:58.244 13:04:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:09:58.244 13:04:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:09:58.244 13:04:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:58.244 13:04:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:09:58.244 13:04:04 -- pm/common@44 -- $ pid=5307 00:09:58.244 13:04:04 -- pm/common@50 -- $ kill -TERM 5307 00:09:58.244 13:04:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:58.244 13:04:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:09:58.244 13:04:04 -- pm/common@44 -- $ pid=5308 00:09:58.244 13:04:04 -- pm/common@50 -- $ kill -TERM 5308 00:09:58.244 13:04:04 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:09:58.244 13:04:04 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:09:58.503 13:04:04 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:58.503 13:04:04 -- common/autotest_common.sh@1711 -- # lcov --version 00:09:58.503 13:04:04 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:58.503 13:04:04 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:58.503 13:04:04 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.503 13:04:04 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.503 13:04:04 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.503 13:04:04 -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.503 13:04:04 -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.503 13:04:04 -- scripts/common.sh@337 -- # 
IFS=.-: 00:09:58.503 13:04:04 -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.503 13:04:04 -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.503 13:04:04 -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.503 13:04:04 -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.503 13:04:04 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.503 13:04:04 -- scripts/common.sh@344 -- # case "$op" in 00:09:58.503 13:04:04 -- scripts/common.sh@345 -- # : 1 00:09:58.503 13:04:04 -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.503 13:04:04 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:58.503 13:04:04 -- scripts/common.sh@365 -- # decimal 1 00:09:58.503 13:04:04 -- scripts/common.sh@353 -- # local d=1 00:09:58.503 13:04:04 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.503 13:04:04 -- scripts/common.sh@355 -- # echo 1 00:09:58.503 13:04:04 -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.503 13:04:04 -- scripts/common.sh@366 -- # decimal 2 00:09:58.504 13:04:04 -- scripts/common.sh@353 -- # local d=2 00:09:58.504 13:04:04 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.504 13:04:04 -- scripts/common.sh@355 -- # echo 2 00:09:58.504 13:04:04 -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.504 13:04:04 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.504 13:04:04 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.504 13:04:04 -- scripts/common.sh@368 -- # return 0 00:09:58.504 13:04:04 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.504 13:04:04 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:58.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.504 --rc genhtml_branch_coverage=1 00:09:58.504 --rc genhtml_function_coverage=1 00:09:58.504 --rc genhtml_legend=1 00:09:58.504 --rc geninfo_all_blocks=1 00:09:58.504 --rc geninfo_unexecuted_blocks=1 00:09:58.504 00:09:58.504 ' 00:09:58.504 13:04:04 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:58.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.504 --rc genhtml_branch_coverage=1 00:09:58.504 --rc genhtml_function_coverage=1 00:09:58.504 --rc genhtml_legend=1 00:09:58.504 --rc geninfo_all_blocks=1 00:09:58.504 --rc geninfo_unexecuted_blocks=1 00:09:58.504 00:09:58.504 ' 00:09:58.504 13:04:04 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:58.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.504 --rc genhtml_branch_coverage=1 00:09:58.504 --rc genhtml_function_coverage=1 00:09:58.504 --rc genhtml_legend=1 00:09:58.504 --rc geninfo_all_blocks=1 00:09:58.504 --rc geninfo_unexecuted_blocks=1 00:09:58.504 00:09:58.504 ' 00:09:58.504 13:04:04 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:58.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.504 --rc genhtml_branch_coverage=1 00:09:58.504 --rc genhtml_function_coverage=1 00:09:58.504 --rc genhtml_legend=1 00:09:58.504 --rc geninfo_all_blocks=1 00:09:58.504 --rc geninfo_unexecuted_blocks=1 00:09:58.504 00:09:58.504 ' 00:09:58.504 13:04:04 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:58.504 13:04:04 -- nvmf/common.sh@7 -- # uname -s 00:09:58.504 13:04:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.504 13:04:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.504 13:04:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.504 13:04:04 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:09:58.504 13:04:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.504 13:04:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.504 13:04:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.504 13:04:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.504 13:04:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.504 13:04:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.504 13:04:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:09:58.504 13:04:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:09:58.504 13:04:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.504 13:04:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.504 13:04:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:58.504 13:04:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.504 13:04:04 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:58.504 13:04:04 -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.504 13:04:04 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.504 13:04:04 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.504 13:04:04 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.504 13:04:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.504 13:04:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.504 13:04:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.504 13:04:04 -- paths/export.sh@5 -- # export PATH 00:09:58.504 13:04:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.504 13:04:04 -- nvmf/common.sh@51 -- # : 0 00:09:58.504 13:04:04 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:58.504 13:04:04 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:58.504 13:04:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.504 13:04:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.504 13:04:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.504 13:04:04 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:58.504 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:58.504 13:04:04 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:58.504 13:04:04 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:58.504 13:04:04 -- nvmf/common.sh@55 -- # 
have_pci_nics=0 00:09:58.504 13:04:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:09:58.504 13:04:04 -- spdk/autotest.sh@32 -- # uname -s 00:09:58.504 13:04:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:09:58.504 13:04:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:09:58.504 13:04:04 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:09:58.504 13:04:04 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:09:58.504 13:04:04 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:09:58.504 13:04:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:09:58.504 13:04:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:09:58.504 13:04:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:09:58.504 13:04:04 -- spdk/autotest.sh@48 -- # udevadm_pid=56661 00:09:58.504 13:04:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:09:58.504 13:04:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:09:58.504 13:04:04 -- pm/common@17 -- # local monitor 00:09:58.504 13:04:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:58.504 13:04:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:58.504 13:04:04 -- pm/common@25 -- # sleep 1 00:09:58.504 13:04:04 -- pm/common@21 -- # date +%s 00:09:58.504 13:04:04 -- pm/common@21 -- # date +%s 00:09:58.504 13:04:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733490244 00:09:58.504 13:04:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733490244 00:09:58.770 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733490244_collect-vmstat.pm.log 00:09:58.770 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733490244_collect-cpu-load.pm.log 00:09:59.765 13:04:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:09:59.765 13:04:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:09:59.765 13:04:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.765 13:04:05 -- common/autotest_common.sh@10 -- # set +x 00:09:59.765 13:04:05 -- spdk/autotest.sh@59 -- # create_test_list 00:09:59.765 13:04:05 -- common/autotest_common.sh@752 -- # xtrace_disable 00:09:59.765 13:04:05 -- common/autotest_common.sh@10 -- # set +x 00:09:59.765 13:04:06 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:09:59.765 13:04:06 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:09:59.765 13:04:06 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:09:59.765 13:04:06 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:09:59.765 13:04:06 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:09:59.765 13:04:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:09:59.765 13:04:06 -- common/autotest_common.sh@1457 -- # uname 00:09:59.765 13:04:06 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:09:59.765 13:04:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:09:59.765 13:04:06 -- common/autotest_common.sh@1477 -- # uname 00:09:59.765 13:04:06 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 
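Annotation: the trace above shows autotest.sh swapping the default systemd-coredump handler for SPDK's own collector before any test runs, then starting the cpu-load and vmstat monitors. A minimal sketch of the core-dump step, assuming the paths shown in the log (this condenses the traced commands and is not a verbatim excerpt):

    # Save the current kernel core handler (the systemd-coredump line in the log)
    # so it can be restored after the run.
    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)
    mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
    # Pipe future core dumps (PID, signal, timestamp) to SPDK's collector script.
    echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' \
        > /proc/sys/kernel/core_pattern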
00:09:59.765 13:04:06 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:09:59.765 13:04:06 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:09:59.765 lcov: LCOV version 1.15 00:09:59.765 13:04:06 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:10:17.866 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:10:17.866 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:10:39.792 13:04:42 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:10:39.792 13:04:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:39.792 13:04:42 -- common/autotest_common.sh@10 -- # set +x 00:10:39.792 13:04:42 -- spdk/autotest.sh@78 -- # rm -f 00:10:39.792 13:04:42 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:39.792 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:39.792 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:10:39.792 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:10:39.792 13:04:43 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:10:39.792 13:04:43 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:10:39.792 13:04:43 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:10:39.792 13:04:43 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:10:39.792 13:04:43 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:10:39.792 13:04:43 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:10:39.792 13:04:43 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:39.792 13:04:43 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:10:39.792 13:04:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:39.792 13:04:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:10:39.792 13:04:43 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:10:39.792 13:04:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:39.792 13:04:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:39.792 13:04:43 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:39.792 13:04:43 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:10:39.792 13:04:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:39.792 13:04:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:10:39.792 13:04:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:10:39.792 13:04:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:39.792 13:04:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:39.792 13:04:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:39.792 13:04:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:10:39.792 13:04:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:10:39.792 13:04:43 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:10:39.792 13:04:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:39.792 13:04:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:39.792 13:04:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:10:39.792 13:04:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:10:39.792 13:04:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:10:39.792 13:04:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:39.792 13:04:43 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:10:39.792 13:04:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:39.792 13:04:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:39.792 13:04:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:10:39.792 13:04:43 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:10:39.792 13:04:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:10:39.792 No valid GPT data, bailing 00:10:39.792 13:04:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:10:39.792 13:04:43 -- scripts/common.sh@394 -- # pt= 00:10:39.792 13:04:43 -- scripts/common.sh@395 -- # return 1 00:10:39.792 13:04:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:10:39.792 1+0 records in 00:10:39.792 1+0 records out 00:10:39.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00427361 s, 245 MB/s 00:10:39.792 13:04:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:39.792 13:04:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:39.792 13:04:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:10:39.792 13:04:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:10:39.792 13:04:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:10:39.792 No valid GPT data, bailing 00:10:39.792 13:04:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:10:39.792 13:04:43 -- scripts/common.sh@394 -- # pt= 00:10:39.792 13:04:43 -- scripts/common.sh@395 -- # return 1 00:10:39.792 13:04:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:10:39.792 1+0 records in 00:10:39.792 1+0 records out 00:10:39.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00459715 s, 228 MB/s 00:10:39.792 13:04:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:39.792 13:04:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:39.792 13:04:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:10:39.792 13:04:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:10:39.792 13:04:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:10:39.792 No valid GPT data, bailing 00:10:39.792 13:04:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:10:39.792 13:04:43 -- scripts/common.sh@394 -- # pt= 00:10:39.792 13:04:43 -- scripts/common.sh@395 -- # return 1 00:10:39.792 13:04:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:10:39.792 1+0 records in 00:10:39.792 1+0 records out 00:10:39.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00408869 s, 256 MB/s 00:10:39.792 13:04:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:39.792 13:04:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:39.792 13:04:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:10:39.792 
13:04:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:10:39.792 13:04:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:10:39.792 No valid GPT data, bailing 00:10:39.792 13:04:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:10:39.792 13:04:43 -- scripts/common.sh@394 -- # pt= 00:10:39.792 13:04:43 -- scripts/common.sh@395 -- # return 1 00:10:39.792 13:04:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:10:39.792 1+0 records in 00:10:39.792 1+0 records out 00:10:39.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00361822 s, 290 MB/s 00:10:39.792 13:04:43 -- spdk/autotest.sh@105 -- # sync 00:10:39.792 13:04:43 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:10:39.792 13:04:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:10:39.792 13:04:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:10:39.792 13:04:45 -- spdk/autotest.sh@111 -- # uname -s 00:10:39.792 13:04:45 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:10:39.792 13:04:45 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:10:39.792 13:04:45 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:10:39.792 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:39.792 Hugepages 00:10:39.792 node hugesize free / total 00:10:39.792 node0 1048576kB 0 / 0 00:10:39.792 node0 2048kB 0 / 0 00:10:39.792 00:10:39.792 Type BDF Vendor Device NUMA Driver Device Block devices 00:10:40.051 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:10:40.051 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:10:40.051 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:10:40.051 13:04:46 -- spdk/autotest.sh@117 -- # uname -s 00:10:40.051 13:04:46 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:10:40.051 13:04:46 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:10:40.051 13:04:46 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:40.672 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:40.930 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:40.930 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:40.930 13:04:47 -- common/autotest_common.sh@1517 -- # sleep 1 00:10:41.863 13:04:48 -- common/autotest_common.sh@1518 -- # bdfs=() 00:10:41.863 13:04:48 -- common/autotest_common.sh@1518 -- # local bdfs 00:10:41.863 13:04:48 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:41.863 13:04:48 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:41.863 13:04:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:41.863 13:04:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:10:41.863 13:04:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:41.863 13:04:48 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:41.863 13:04:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:42.120 13:04:48 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:10:42.120 13:04:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:10:42.120 13:04:48 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 
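Annotation: in the pre-cleanup pass above, the per-namespace loop decided that none of the four NVMe namespaces held a partition table (spdk-gpt.py bailed with "No valid GPT data" and blkid returned nothing), so each one had its first MiB zeroed. A condensed sketch of that loop, under the assumption that the traced block_in_use helper reduces to the blkid check shown (not a verbatim excerpt):

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do              # whole namespaces only, skip partitions
        pt=$(blkid -s PTTYPE -o value "$dev")     # empty when no partition table is found
        if [[ -z $pt ]]; then
            # Matches the '1+0 records in/out' lines above: wipe the first MiB.
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done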
00:10:42.378 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:42.378 Waiting for block devices as requested 00:10:42.378 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:42.378 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:42.636 13:04:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:10:42.636 13:04:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:10:42.636 13:04:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:10:42.636 13:04:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:10:42.636 13:04:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:10:42.636 13:04:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:10:42.636 13:04:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:10:42.636 13:04:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:10:42.636 13:04:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:10:42.636 13:04:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:10:42.636 13:04:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:10:42.636 13:04:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:10:42.636 13:04:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:10:42.636 13:04:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:10:42.636 13:04:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:10:42.636 13:04:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:10:42.636 13:04:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:10:42.636 13:04:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:10:42.636 13:04:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:10:42.636 13:04:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:10:42.636 13:04:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:10:42.636 13:04:49 -- common/autotest_common.sh@1543 -- # continue 00:10:42.636 13:04:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:10:42.636 13:04:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:10:42.637 13:04:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:10:42.637 13:04:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:10:42.637 13:04:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:10:42.637 13:04:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:10:42.637 13:04:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:10:42.637 13:04:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:10:42.637 13:04:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:10:42.637 13:04:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:10:42.637 13:04:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:10:42.637 13:04:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:10:42.637 13:04:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:10:42.637 13:04:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:10:42.637 13:04:49 -- 
common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:10:42.637 13:04:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:10:42.637 13:04:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:10:42.637 13:04:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:10:42.637 13:04:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:10:42.637 13:04:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:10:42.637 13:04:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:10:42.637 13:04:49 -- common/autotest_common.sh@1543 -- # continue 00:10:42.637 13:04:49 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:10:42.637 13:04:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:42.637 13:04:49 -- common/autotest_common.sh@10 -- # set +x 00:10:42.637 13:04:49 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:10:42.637 13:04:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:42.637 13:04:49 -- common/autotest_common.sh@10 -- # set +x 00:10:42.637 13:04:49 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:43.201 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:43.459 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:43.459 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:43.459 13:04:49 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:10:43.459 13:04:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:43.459 13:04:49 -- common/autotest_common.sh@10 -- # set +x 00:10:43.459 13:04:49 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:10:43.459 13:04:49 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:10:43.459 13:04:49 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:10:43.459 13:04:49 -- common/autotest_common.sh@1563 -- # bdfs=() 00:10:43.459 13:04:49 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:10:43.459 13:04:49 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:10:43.459 13:04:49 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:10:43.459 13:04:49 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:10:43.459 13:04:49 -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:43.459 13:04:49 -- common/autotest_common.sh@1498 -- # local bdfs 00:10:43.459 13:04:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:43.459 13:04:49 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:43.459 13:04:49 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:43.717 13:04:50 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:10:43.717 13:04:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:10:43.717 13:04:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:10:43.717 13:04:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:10:43.717 13:04:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:10:43.717 13:04:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:43.717 13:04:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:10:43.717 13:04:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:10:43.717 13:04:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:10:43.717 13:04:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
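Annotation: the nvme_namespace_revert pass above probes each controller the same way: read OACS from nvme id-ctrl, test the Namespace Management bit, then check whether any unallocated capacity (unvmcap) would need reverting. A sketch of that probe for the two controllers in this run; device names are taken from the log, and the bitmask step is presumably how the traced oacs_ns_manage=8 value arises, since 0x12a & 0x8 = 8:

    for ctrl in /dev/nvme0 /dev/nvme1; do
        oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)         # ' 0x12a' in this run
        oacs_ns_manage=$((oacs & 0x8))                                 # bit 3 = NS management
        [[ $oacs_ns_manage -ne 0 ]] || continue
        unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)   # ' 0' in this run
        [[ $unvmcap -eq 0 ]] && continue   # no unallocated capacity, nothing to revert
    done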
00:10:43.717 13:04:50 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:10:43.717 13:04:50 -- common/autotest_common.sh@1572 -- # return 0 00:10:43.717 13:04:50 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:10:43.717 13:04:50 -- common/autotest_common.sh@1580 -- # return 0 00:10:43.717 13:04:50 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:10:43.717 13:04:50 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:10:43.717 13:04:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:43.717 13:04:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:43.717 13:04:50 -- spdk/autotest.sh@149 -- # timing_enter lib 00:10:43.717 13:04:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:43.717 13:04:50 -- common/autotest_common.sh@10 -- # set +x 00:10:43.717 13:04:50 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:10:43.717 13:04:50 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:43.717 13:04:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:43.717 13:04:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.717 13:04:50 -- common/autotest_common.sh@10 -- # set +x 00:10:43.717 ************************************ 00:10:43.717 START TEST env 00:10:43.717 ************************************ 00:10:43.717 13:04:50 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:43.717 * Looking for test storage... 00:10:43.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:10:43.717 13:04:50 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:43.717 13:04:50 env -- common/autotest_common.sh@1711 -- # lcov --version 00:10:43.717 13:04:50 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:43.717 13:04:50 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:43.717 13:04:50 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.717 13:04:50 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.717 13:04:50 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.717 13:04:50 env -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.717 13:04:50 env -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.717 13:04:50 env -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.717 13:04:50 env -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.717 13:04:50 env -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.717 13:04:50 env -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.717 13:04:50 env -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.717 13:04:50 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.717 13:04:50 env -- scripts/common.sh@344 -- # case "$op" in 00:10:43.717 13:04:50 env -- scripts/common.sh@345 -- # : 1 00:10:43.717 13:04:50 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.717 13:04:50 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.717 13:04:50 env -- scripts/common.sh@365 -- # decimal 1 00:10:43.717 13:04:50 env -- scripts/common.sh@353 -- # local d=1 00:10:43.717 13:04:50 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.717 13:04:50 env -- scripts/common.sh@355 -- # echo 1 00:10:43.717 13:04:50 env -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.717 13:04:50 env -- scripts/common.sh@366 -- # decimal 2 00:10:43.717 13:04:50 env -- scripts/common.sh@353 -- # local d=2 00:10:43.717 13:04:50 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.717 13:04:50 env -- scripts/common.sh@355 -- # echo 2 00:10:43.717 13:04:50 env -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.717 13:04:50 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.717 13:04:50 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.717 13:04:50 env -- scripts/common.sh@368 -- # return 0 00:10:43.717 13:04:50 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.717 13:04:50 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:43.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.717 --rc genhtml_branch_coverage=1 00:10:43.717 --rc genhtml_function_coverage=1 00:10:43.717 --rc genhtml_legend=1 00:10:43.717 --rc geninfo_all_blocks=1 00:10:43.717 --rc geninfo_unexecuted_blocks=1 00:10:43.717 00:10:43.717 ' 00:10:43.717 13:04:50 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:43.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.717 --rc genhtml_branch_coverage=1 00:10:43.717 --rc genhtml_function_coverage=1 00:10:43.717 --rc genhtml_legend=1 00:10:43.717 --rc geninfo_all_blocks=1 00:10:43.717 --rc geninfo_unexecuted_blocks=1 00:10:43.717 00:10:43.717 ' 00:10:43.717 13:04:50 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:43.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.717 --rc genhtml_branch_coverage=1 00:10:43.717 --rc genhtml_function_coverage=1 00:10:43.717 --rc genhtml_legend=1 00:10:43.717 --rc geninfo_all_blocks=1 00:10:43.717 --rc geninfo_unexecuted_blocks=1 00:10:43.717 00:10:43.717 ' 00:10:43.717 13:04:50 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:43.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.717 --rc genhtml_branch_coverage=1 00:10:43.717 --rc genhtml_function_coverage=1 00:10:43.717 --rc genhtml_legend=1 00:10:43.717 --rc geninfo_all_blocks=1 00:10:43.717 --rc geninfo_unexecuted_blocks=1 00:10:43.717 00:10:43.717 ' 00:10:43.717 13:04:50 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:43.717 13:04:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:43.717 13:04:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.717 13:04:50 env -- common/autotest_common.sh@10 -- # set +x 00:10:43.717 ************************************ 00:10:43.717 START TEST env_memory 00:10:43.717 ************************************ 00:10:43.717 13:04:50 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:43.975 00:10:43.975 00:10:43.975 CUnit - A unit testing framework for C - Version 2.1-3 00:10:43.975 http://cunit.sourceforge.net/ 00:10:43.975 00:10:43.975 00:10:43.975 Suite: memory 00:10:43.975 Test: alloc and free memory map ...[2024-12-06 13:04:50.284369] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:10:43.975 passed 00:10:43.975 Test: mem map translation ...[2024-12-06 13:04:50.315007] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:10:43.975 [2024-12-06 13:04:50.315063] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:10:43.975 [2024-12-06 13:04:50.315118] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:10:43.976 [2024-12-06 13:04:50.315129] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:10:43.976 passed 00:10:43.976 Test: mem map registration ...[2024-12-06 13:04:50.379908] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:10:43.976 [2024-12-06 13:04:50.379968] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:10:43.976 passed 00:10:43.976 Test: mem map adjacent registrations ...passed 00:10:43.976 00:10:43.976 Run Summary: Type Total Ran Passed Failed Inactive 00:10:43.976 suites 1 1 n/a 0 0 00:10:43.976 tests 4 4 4 0 0 00:10:43.976 asserts 152 152 152 0 n/a 00:10:43.976 00:10:43.976 Elapsed time = 0.214 seconds 00:10:43.976 00:10:43.976 real 0m0.233s 00:10:43.976 user 0m0.218s 00:10:43.976 sys 0m0.012s 00:10:43.976 13:04:50 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.976 13:04:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:10:43.976 ************************************ 00:10:43.976 END TEST env_memory 00:10:43.976 ************************************ 00:10:44.234 13:04:50 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:44.234 13:04:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:44.234 13:04:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.234 13:04:50 env -- common/autotest_common.sh@10 -- # set +x 00:10:44.234 ************************************ 00:10:44.234 START TEST env_vtophys 00:10:44.234 ************************************ 00:10:44.234 13:04:50 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:44.234 EAL: lib.eal log level changed from notice to debug 00:10:44.234 EAL: Detected lcore 0 as core 0 on socket 0 00:10:44.234 EAL: Detected lcore 1 as core 0 on socket 0 00:10:44.234 EAL: Detected lcore 2 as core 0 on socket 0 00:10:44.234 EAL: Detected lcore 3 as core 0 on socket 0 00:10:44.234 EAL: Detected lcore 4 as core 0 on socket 0 00:10:44.234 EAL: Detected lcore 5 as core 0 on socket 0 00:10:44.234 EAL: Detected lcore 6 as core 0 on socket 0 00:10:44.234 EAL: Detected lcore 7 as core 0 on socket 0 00:10:44.234 EAL: Detected lcore 8 as core 0 on socket 0 00:10:44.234 EAL: Detected lcore 9 as core 0 on socket 0 00:10:44.234 EAL: Maximum logical cores by configuration: 128 00:10:44.234 EAL: Detected CPU lcores: 10 00:10:44.234 EAL: Detected NUMA nodes: 1 00:10:44.234 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:10:44.234 EAL: Detected shared linkage of DPDK 00:10:44.234 EAL: No 
shared files mode enabled, IPC will be disabled 00:10:44.234 EAL: Selected IOVA mode 'PA' 00:10:44.234 EAL: Probing VFIO support... 00:10:44.234 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:44.234 EAL: VFIO modules not loaded, skipping VFIO support... 00:10:44.234 EAL: Ask a virtual area of 0x2e000 bytes 00:10:44.234 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:44.234 EAL: Setting up physically contiguous memory... 00:10:44.234 EAL: Setting maximum number of open files to 524288 00:10:44.234 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:44.234 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:44.234 EAL: Ask a virtual area of 0x61000 bytes 00:10:44.234 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:44.234 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:44.234 EAL: Ask a virtual area of 0x400000000 bytes 00:10:44.235 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:44.235 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:44.235 EAL: Ask a virtual area of 0x61000 bytes 00:10:44.235 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:44.235 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:44.235 EAL: Ask a virtual area of 0x400000000 bytes 00:10:44.235 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:44.235 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:44.235 EAL: Ask a virtual area of 0x61000 bytes 00:10:44.235 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:44.235 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:44.235 EAL: Ask a virtual area of 0x400000000 bytes 00:10:44.235 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:44.235 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:44.235 EAL: Ask a virtual area of 0x61000 bytes 00:10:44.235 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:44.235 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:44.235 EAL: Ask a virtual area of 0x400000000 bytes 00:10:44.235 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:44.235 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:10:44.235 EAL: Hugepages will be freed exactly as allocated. 00:10:44.235 EAL: No shared files mode enabled, IPC is disabled 00:10:44.235 EAL: No shared files mode enabled, IPC is disabled 00:10:44.235 EAL: TSC frequency is ~2200000 KHz 00:10:44.235 EAL: Main lcore 0 is ready (tid=7f9e97e98a00;cpuset=[0]) 00:10:44.235 EAL: Trying to obtain current memory policy. 00:10:44.235 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:44.235 EAL: Restoring previous memory policy: 0 00:10:44.235 EAL: request: mp_malloc_sync 00:10:44.235 EAL: No shared files mode enabled, IPC is disabled 00:10:44.235 EAL: Heap on socket 0 was expanded by 2MB 00:10:44.235 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:44.235 EAL: No PCI address specified using 'addr=' in: bus=pci 00:10:44.235 EAL: Mem event callback 'spdk:(nil)' registered 00:10:44.235 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:10:44.235 00:10:44.235 00:10:44.235 CUnit - A unit testing framework for C - Version 2.1-3 00:10:44.235 http://cunit.sourceforge.net/ 00:10:44.235 00:10:44.235 00:10:44.235 Suite: components_suite 00:10:44.235 Test: vtophys_malloc_test ...passed 00:10:44.235 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:10:44.235 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:44.235 EAL: Restoring previous memory policy: 4 00:10:44.235 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.235 EAL: request: mp_malloc_sync 00:10:44.235 EAL: No shared files mode enabled, IPC is disabled 00:10:44.235 EAL: Heap on socket 0 was expanded by 4MB 00:10:44.235 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.235 EAL: request: mp_malloc_sync 00:10:44.235 EAL: No shared files mode enabled, IPC is disabled 00:10:44.235 EAL: Heap on socket 0 was shrunk by 4MB 00:10:44.235 EAL: Trying to obtain current memory policy. 00:10:44.235 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:44.235 EAL: Restoring previous memory policy: 4 00:10:44.235 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.235 EAL: request: mp_malloc_sync 00:10:44.235 EAL: No shared files mode enabled, IPC is disabled 00:10:44.235 EAL: Heap on socket 0 was expanded by 6MB 00:10:44.235 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.235 EAL: request: mp_malloc_sync 00:10:44.235 EAL: No shared files mode enabled, IPC is disabled 00:10:44.235 EAL: Heap on socket 0 was shrunk by 6MB 00:10:44.235 EAL: Trying to obtain current memory policy. 00:10:44.235 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:44.235 EAL: Restoring previous memory policy: 4 00:10:44.235 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.235 EAL: request: mp_malloc_sync 00:10:44.235 EAL: No shared files mode enabled, IPC is disabled 00:10:44.235 EAL: Heap on socket 0 was expanded by 10MB 00:10:44.235 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.235 EAL: request: mp_malloc_sync 00:10:44.235 EAL: No shared files mode enabled, IPC is disabled 00:10:44.235 EAL: Heap on socket 0 was shrunk by 10MB 00:10:44.235 EAL: Trying to obtain current memory policy. 00:10:44.235 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:44.235 EAL: Restoring previous memory policy: 4 00:10:44.235 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.235 EAL: request: mp_malloc_sync 00:10:44.235 EAL: No shared files mode enabled, IPC is disabled 00:10:44.235 EAL: Heap on socket 0 was expanded by 18MB 00:10:44.235 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.235 EAL: request: mp_malloc_sync 00:10:44.235 EAL: No shared files mode enabled, IPC is disabled 00:10:44.235 EAL: Heap on socket 0 was shrunk by 18MB 00:10:44.235 EAL: Trying to obtain current memory policy. 00:10:44.235 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:44.235 EAL: Restoring previous memory policy: 4 00:10:44.235 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.235 EAL: request: mp_malloc_sync 00:10:44.235 EAL: No shared files mode enabled, IPC is disabled 00:10:44.235 EAL: Heap on socket 0 was expanded by 34MB 00:10:44.235 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.235 EAL: request: mp_malloc_sync 00:10:44.235 EAL: No shared files mode enabled, IPC is disabled 00:10:44.235 EAL: Heap on socket 0 was shrunk by 34MB 00:10:44.235 EAL: Trying to obtain current memory policy. 
00:10:44.235 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:44.235 EAL: Restoring previous memory policy: 4 00:10:44.235 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.235 EAL: request: mp_malloc_sync 00:10:44.235 EAL: No shared files mode enabled, IPC is disabled 00:10:44.235 EAL: Heap on socket 0 was expanded by 66MB 00:10:44.235 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.235 EAL: request: mp_malloc_sync 00:10:44.235 EAL: No shared files mode enabled, IPC is disabled 00:10:44.235 EAL: Heap on socket 0 was shrunk by 66MB 00:10:44.235 EAL: Trying to obtain current memory policy. 00:10:44.235 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:44.235 EAL: Restoring previous memory policy: 4 00:10:44.235 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.235 EAL: request: mp_malloc_sync 00:10:44.235 EAL: No shared files mode enabled, IPC is disabled 00:10:44.235 EAL: Heap on socket 0 was expanded by 130MB 00:10:44.493 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.493 EAL: request: mp_malloc_sync 00:10:44.493 EAL: No shared files mode enabled, IPC is disabled 00:10:44.493 EAL: Heap on socket 0 was shrunk by 130MB 00:10:44.493 EAL: Trying to obtain current memory policy. 00:10:44.493 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:44.493 EAL: Restoring previous memory policy: 4 00:10:44.493 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.493 EAL: request: mp_malloc_sync 00:10:44.493 EAL: No shared files mode enabled, IPC is disabled 00:10:44.493 EAL: Heap on socket 0 was expanded by 258MB 00:10:44.493 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.493 EAL: request: mp_malloc_sync 00:10:44.493 EAL: No shared files mode enabled, IPC is disabled 00:10:44.493 EAL: Heap on socket 0 was shrunk by 258MB 00:10:44.493 EAL: Trying to obtain current memory policy. 00:10:44.494 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:44.494 EAL: Restoring previous memory policy: 4 00:10:44.494 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.494 EAL: request: mp_malloc_sync 00:10:44.494 EAL: No shared files mode enabled, IPC is disabled 00:10:44.494 EAL: Heap on socket 0 was expanded by 514MB 00:10:44.750 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.750 EAL: request: mp_malloc_sync 00:10:44.750 EAL: No shared files mode enabled, IPC is disabled 00:10:44.750 EAL: Heap on socket 0 was shrunk by 514MB 00:10:44.750 EAL: Trying to obtain current memory policy. 
00:10:44.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:44.750 EAL: Restoring previous memory policy: 4 00:10:44.750 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.750 EAL: request: mp_malloc_sync 00:10:44.750 EAL: No shared files mode enabled, IPC is disabled 00:10:44.750 EAL: Heap on socket 0 was expanded by 1026MB 00:10:45.007 EAL: Calling mem event callback 'spdk:(nil)' 00:10:45.007 EAL: request: mp_malloc_sync 00:10:45.007 EAL: No shared files mode enabled, IPC is disabled 00:10:45.007 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:45.007 passed 00:10:45.007 00:10:45.007 Run Summary: Type Total Ran Passed Failed Inactive 00:10:45.007 suites 1 1 n/a 0 0 00:10:45.007 tests 2 2 2 0 0 00:10:45.007 asserts 5533 5533 5533 0 n/a 00:10:45.007 00:10:45.007 Elapsed time = 0.733 seconds 00:10:45.007 EAL: Calling mem event callback 'spdk:(nil)' 00:10:45.007 EAL: request: mp_malloc_sync 00:10:45.007 EAL: No shared files mode enabled, IPC is disabled 00:10:45.007 EAL: Heap on socket 0 was shrunk by 2MB 00:10:45.007 EAL: No shared files mode enabled, IPC is disabled 00:10:45.007 EAL: No shared files mode enabled, IPC is disabled 00:10:45.007 EAL: No shared files mode enabled, IPC is disabled 00:10:45.007 00:10:45.007 real 0m0.943s 00:10:45.007 user 0m0.489s 00:10:45.007 sys 0m0.324s 00:10:45.007 13:04:51 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.007 13:04:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:10:45.007 ************************************ 00:10:45.007 END TEST env_vtophys 00:10:45.007 ************************************ 00:10:45.007 13:04:51 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:45.007 13:04:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:45.007 13:04:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.007 13:04:51 env -- common/autotest_common.sh@10 -- # set +x 00:10:45.007 ************************************ 00:10:45.007 START TEST env_pci 00:10:45.007 ************************************ 00:10:45.007 13:04:51 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:45.007 00:10:45.007 00:10:45.007 CUnit - A unit testing framework for C - Version 2.1-3 00:10:45.007 http://cunit.sourceforge.net/ 00:10:45.007 00:10:45.007 00:10:45.007 Suite: pci 00:10:45.007 Test: pci_hook ...[2024-12-06 13:04:51.521492] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58941 has claimed it 00:10:45.007 passed 00:10:45.007 00:10:45.007 Run Summary: Type Total Ran Passed Failed Inactive 00:10:45.007 suites 1 1 n/a 0 0 00:10:45.007 tests 1 1 1 0 0 00:10:45.007 asserts 25 25 25 0 n/a 00:10:45.007 00:10:45.007 Elapsed time = 0.002 seconds 00:10:45.007 EAL: Cannot find device (10000:00:01.0) 00:10:45.007 EAL: Failed to attach device on primary process 00:10:45.007 ************************************ 00:10:45.007 END TEST env_pci 00:10:45.007 ************************************ 00:10:45.007 00:10:45.007 real 0m0.019s 00:10:45.007 user 0m0.008s 00:10:45.007 sys 0m0.011s 00:10:45.007 13:04:51 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.007 13:04:51 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:10:45.264 13:04:51 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:10:45.264 13:04:51 env -- env/env.sh@15 -- # uname 00:10:45.264 13:04:51 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:10:45.264 13:04:51 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:10:45.264 13:04:51 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:45.264 13:04:51 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:45.264 13:04:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.264 13:04:51 env -- common/autotest_common.sh@10 -- # set +x 00:10:45.264 ************************************ 00:10:45.264 START TEST env_dpdk_post_init 00:10:45.264 ************************************ 00:10:45.264 13:04:51 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:45.264 EAL: Detected CPU lcores: 10 00:10:45.264 EAL: Detected NUMA nodes: 1 00:10:45.264 EAL: Detected shared linkage of DPDK 00:10:45.264 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:45.264 EAL: Selected IOVA mode 'PA' 00:10:45.264 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:45.264 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:10:45.264 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:10:45.264 Starting DPDK initialization... 00:10:45.264 Starting SPDK post initialization... 00:10:45.264 SPDK NVMe probe 00:10:45.264 Attaching to 0000:00:10.0 00:10:45.264 Attaching to 0000:00:11.0 00:10:45.264 Attached to 0000:00:10.0 00:10:45.264 Attached to 0000:00:11.0 00:10:45.264 Cleaning up... 00:10:45.264 00:10:45.264 real 0m0.189s 00:10:45.264 user 0m0.055s 00:10:45.264 sys 0m0.033s 00:10:45.264 13:04:51 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.264 ************************************ 00:10:45.264 END TEST env_dpdk_post_init 00:10:45.264 ************************************ 00:10:45.264 13:04:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:10:45.522 13:04:51 env -- env/env.sh@26 -- # uname 00:10:45.522 13:04:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:45.522 13:04:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:45.522 13:04:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:45.522 13:04:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.522 13:04:51 env -- common/autotest_common.sh@10 -- # set +x 00:10:45.522 ************************************ 00:10:45.522 START TEST env_mem_callbacks 00:10:45.522 ************************************ 00:10:45.522 13:04:51 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:45.522 EAL: Detected CPU lcores: 10 00:10:45.522 EAL: Detected NUMA nodes: 1 00:10:45.522 EAL: Detected shared linkage of DPDK 00:10:45.522 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:45.522 EAL: Selected IOVA mode 'PA' 00:10:45.522 00:10:45.522 00:10:45.522 CUnit - A unit testing framework for C - Version 2.1-3 00:10:45.522 http://cunit.sourceforge.net/ 00:10:45.522 00:10:45.522 00:10:45.522 Suite: memory 00:10:45.522 Test: test ... 
00:10:45.522 register 0x200000200000 2097152 00:10:45.522 malloc 3145728 00:10:45.522 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:45.522 register 0x200000400000 4194304 00:10:45.522 buf 0x200000500000 len 3145728 PASSED 00:10:45.522 malloc 64 00:10:45.522 buf 0x2000004fff40 len 64 PASSED 00:10:45.522 malloc 4194304 00:10:45.522 register 0x200000800000 6291456 00:10:45.522 buf 0x200000a00000 len 4194304 PASSED 00:10:45.522 free 0x200000500000 3145728 00:10:45.522 free 0x2000004fff40 64 00:10:45.522 unregister 0x200000400000 4194304 PASSED 00:10:45.522 free 0x200000a00000 4194304 00:10:45.522 unregister 0x200000800000 6291456 PASSED 00:10:45.522 malloc 8388608 00:10:45.522 register 0x200000400000 10485760 00:10:45.522 buf 0x200000600000 len 8388608 PASSED 00:10:45.522 free 0x200000600000 8388608 00:10:45.522 unregister 0x200000400000 10485760 PASSED 00:10:45.522 passed 00:10:45.522 00:10:45.522 Run Summary: Type Total Ran Passed Failed Inactive 00:10:45.522 suites 1 1 n/a 0 0 00:10:45.522 tests 1 1 1 0 0 00:10:45.522 asserts 15 15 15 0 n/a 00:10:45.522 00:10:45.522 Elapsed time = 0.008 seconds 00:10:45.522 00:10:45.522 real 0m0.140s 00:10:45.522 user 0m0.018s 00:10:45.522 sys 0m0.021s 00:10:45.522 13:04:51 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.522 13:04:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:10:45.522 ************************************ 00:10:45.522 END TEST env_mem_callbacks 00:10:45.523 ************************************ 00:10:45.523 00:10:45.523 real 0m1.955s 00:10:45.523 user 0m0.974s 00:10:45.523 sys 0m0.636s 00:10:45.523 13:04:51 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.523 13:04:51 env -- common/autotest_common.sh@10 -- # set +x 00:10:45.523 ************************************ 00:10:45.523 END TEST env 00:10:45.523 ************************************ 00:10:45.523 13:04:52 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:45.523 13:04:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:45.523 13:04:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.523 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:10:45.523 ************************************ 00:10:45.523 START TEST rpc 00:10:45.523 ************************************ 00:10:45.523 13:04:52 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:45.781 * Looking for test storage... 
00:10:45.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:45.781 13:04:52 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:45.781 13:04:52 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:45.781 13:04:52 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:45.781 13:04:52 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:45.781 13:04:52 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.781 13:04:52 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.781 13:04:52 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.781 13:04:52 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.781 13:04:52 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.781 13:04:52 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.781 13:04:52 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.781 13:04:52 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.781 13:04:52 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.781 13:04:52 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.781 13:04:52 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.781 13:04:52 rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:45.781 13:04:52 rpc -- scripts/common.sh@345 -- # : 1 00:10:45.781 13:04:52 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.781 13:04:52 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:45.781 13:04:52 rpc -- scripts/common.sh@365 -- # decimal 1 00:10:45.781 13:04:52 rpc -- scripts/common.sh@353 -- # local d=1 00:10:45.781 13:04:52 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.781 13:04:52 rpc -- scripts/common.sh@355 -- # echo 1 00:10:45.781 13:04:52 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.781 13:04:52 rpc -- scripts/common.sh@366 -- # decimal 2 00:10:45.781 13:04:52 rpc -- scripts/common.sh@353 -- # local d=2 00:10:45.781 13:04:52 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.781 13:04:52 rpc -- scripts/common.sh@355 -- # echo 2 00:10:45.781 13:04:52 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.781 13:04:52 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.781 13:04:52 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.781 13:04:52 rpc -- scripts/common.sh@368 -- # return 0 00:10:45.781 13:04:52 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.781 13:04:52 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:45.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.781 --rc genhtml_branch_coverage=1 00:10:45.781 --rc genhtml_function_coverage=1 00:10:45.781 --rc genhtml_legend=1 00:10:45.781 --rc geninfo_all_blocks=1 00:10:45.781 --rc geninfo_unexecuted_blocks=1 00:10:45.781 00:10:45.781 ' 00:10:45.781 13:04:52 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:45.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.781 --rc genhtml_branch_coverage=1 00:10:45.781 --rc genhtml_function_coverage=1 00:10:45.781 --rc genhtml_legend=1 00:10:45.781 --rc geninfo_all_blocks=1 00:10:45.781 --rc geninfo_unexecuted_blocks=1 00:10:45.781 00:10:45.781 ' 00:10:45.781 13:04:52 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:45.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.781 --rc genhtml_branch_coverage=1 00:10:45.781 --rc genhtml_function_coverage=1 00:10:45.781 --rc 
genhtml_legend=1 00:10:45.781 --rc geninfo_all_blocks=1 00:10:45.781 --rc geninfo_unexecuted_blocks=1 00:10:45.781 00:10:45.781 ' 00:10:45.781 13:04:52 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:45.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.781 --rc genhtml_branch_coverage=1 00:10:45.781 --rc genhtml_function_coverage=1 00:10:45.781 --rc genhtml_legend=1 00:10:45.781 --rc geninfo_all_blocks=1 00:10:45.781 --rc geninfo_unexecuted_blocks=1 00:10:45.781 00:10:45.781 ' 00:10:45.781 13:04:52 rpc -- rpc/rpc.sh@65 -- # spdk_pid=59059 00:10:45.781 13:04:52 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:10:45.781 13:04:52 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:45.781 13:04:52 rpc -- rpc/rpc.sh@67 -- # waitforlisten 59059 00:10:45.781 13:04:52 rpc -- common/autotest_common.sh@835 -- # '[' -z 59059 ']' 00:10:45.781 13:04:52 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.781 13:04:52 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.781 13:04:52 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.781 13:04:52 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.781 13:04:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.781 [2024-12-06 13:04:52.299764] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:10:45.781 [2024-12-06 13:04:52.300114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59059 ] 00:10:46.039 [2024-12-06 13:04:52.448827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.039 [2024-12-06 13:04:52.490122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:46.039 [2024-12-06 13:04:52.490380] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 59059' to capture a snapshot of events at runtime. 00:10:46.039 [2024-12-06 13:04:52.490418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.039 [2024-12-06 13:04:52.490436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.039 [2024-12-06 13:04:52.490449] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid59059 for offline analysis/debug. 
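The startup sequence above comes from rpc.sh: it launches spdk_tgt with the bdev tracepoint group enabled (-e bdev) and then blocks in waitforlisten until the target is serving JSON-RPC on /var/tmp/spdk.sock. A minimal sketch of that wait loop follows, under the assumption that polling spdk_get_version is an acceptable liveness probe; the real helper in common/autotest_common.sh is more elaborate (retry budget, pid liveness checks):

    # Start the target, then poll its RPC socket; spdk_get_version only
    # succeeds once the server is accepting connections on the socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    for ((i = 0; i < 100; i++)); do
        if scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &> /dev/null; then
            break
        fi
        sleep 0.1
    done

Once the loop exits, every rpc_cmd in the traces below talks to this pid (59059 here), which is also why the trace shm path printed above embeds the pid.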
00:10:46.039 [2024-12-06 13:04:52.490895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.298 13:04:52 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.298 13:04:52 rpc -- common/autotest_common.sh@868 -- # return 0 00:10:46.298 13:04:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:46.298 13:04:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:46.298 13:04:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:46.298 13:04:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:46.298 13:04:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:46.298 13:04:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.298 13:04:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.298 ************************************ 00:10:46.298 START TEST rpc_integrity 00:10:46.298 ************************************ 00:10:46.298 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:10:46.298 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:46.298 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.298 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:46.298 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.298 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:46.298 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:46.298 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:46.298 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:46.298 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.298 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:46.298 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.298 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:46.298 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:46.298 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.298 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:46.298 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.298 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:46.298 { 00:10:46.298 "aliases": [ 00:10:46.298 "e7dd2adf-76e9-40d2-b3e6-498ea1f9fd4d" 00:10:46.298 ], 00:10:46.298 "assigned_rate_limits": { 00:10:46.298 "r_mbytes_per_sec": 0, 00:10:46.298 "rw_ios_per_sec": 0, 00:10:46.298 "rw_mbytes_per_sec": 0, 00:10:46.298 "w_mbytes_per_sec": 0 00:10:46.298 }, 00:10:46.298 "block_size": 512, 00:10:46.298 "claimed": false, 00:10:46.298 "driver_specific": {}, 00:10:46.298 "memory_domains": [ 00:10:46.298 { 00:10:46.298 "dma_device_id": "system", 00:10:46.298 "dma_device_type": 1 00:10:46.298 }, 00:10:46.298 { 00:10:46.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.298 "dma_device_type": 2 00:10:46.298 } 00:10:46.298 ], 00:10:46.299 "name": "Malloc0", 
00:10:46.299 "num_blocks": 16384, 00:10:46.299 "product_name": "Malloc disk", 00:10:46.299 "supported_io_types": { 00:10:46.299 "abort": true, 00:10:46.299 "compare": false, 00:10:46.299 "compare_and_write": false, 00:10:46.299 "copy": true, 00:10:46.299 "flush": true, 00:10:46.299 "get_zone_info": false, 00:10:46.299 "nvme_admin": false, 00:10:46.299 "nvme_io": false, 00:10:46.299 "nvme_io_md": false, 00:10:46.299 "nvme_iov_md": false, 00:10:46.299 "read": true, 00:10:46.299 "reset": true, 00:10:46.299 "seek_data": false, 00:10:46.299 "seek_hole": false, 00:10:46.299 "unmap": true, 00:10:46.299 "write": true, 00:10:46.299 "write_zeroes": true, 00:10:46.299 "zcopy": true, 00:10:46.299 "zone_append": false, 00:10:46.299 "zone_management": false 00:10:46.299 }, 00:10:46.299 "uuid": "e7dd2adf-76e9-40d2-b3e6-498ea1f9fd4d", 00:10:46.299 "zoned": false 00:10:46.299 } 00:10:46.299 ]' 00:10:46.299 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:46.558 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:46.558 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:46.558 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.558 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:46.558 [2024-12-06 13:04:52.849514] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:46.558 [2024-12-06 13:04:52.849694] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.558 [2024-12-06 13:04:52.849736] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2139440 00:10:46.558 [2024-12-06 13:04:52.849752] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.558 [2024-12-06 13:04:52.851449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.558 [2024-12-06 13:04:52.851501] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:46.558 Passthru0 00:10:46.558 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.558 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:46.558 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.558 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:46.558 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.558 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:46.558 { 00:10:46.558 "aliases": [ 00:10:46.558 "e7dd2adf-76e9-40d2-b3e6-498ea1f9fd4d" 00:10:46.558 ], 00:10:46.558 "assigned_rate_limits": { 00:10:46.558 "r_mbytes_per_sec": 0, 00:10:46.558 "rw_ios_per_sec": 0, 00:10:46.558 "rw_mbytes_per_sec": 0, 00:10:46.558 "w_mbytes_per_sec": 0 00:10:46.558 }, 00:10:46.558 "block_size": 512, 00:10:46.558 "claim_type": "exclusive_write", 00:10:46.558 "claimed": true, 00:10:46.558 "driver_specific": {}, 00:10:46.558 "memory_domains": [ 00:10:46.558 { 00:10:46.558 "dma_device_id": "system", 00:10:46.558 "dma_device_type": 1 00:10:46.558 }, 00:10:46.558 { 00:10:46.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.558 "dma_device_type": 2 00:10:46.558 } 00:10:46.558 ], 00:10:46.558 "name": "Malloc0", 00:10:46.558 "num_blocks": 16384, 00:10:46.558 "product_name": "Malloc disk", 00:10:46.558 "supported_io_types": { 00:10:46.558 "abort": true, 00:10:46.558 "compare": false, 00:10:46.558 
"compare_and_write": false, 00:10:46.558 "copy": true, 00:10:46.558 "flush": true, 00:10:46.558 "get_zone_info": false, 00:10:46.558 "nvme_admin": false, 00:10:46.558 "nvme_io": false, 00:10:46.558 "nvme_io_md": false, 00:10:46.558 "nvme_iov_md": false, 00:10:46.558 "read": true, 00:10:46.558 "reset": true, 00:10:46.558 "seek_data": false, 00:10:46.558 "seek_hole": false, 00:10:46.558 "unmap": true, 00:10:46.558 "write": true, 00:10:46.558 "write_zeroes": true, 00:10:46.558 "zcopy": true, 00:10:46.558 "zone_append": false, 00:10:46.558 "zone_management": false 00:10:46.558 }, 00:10:46.558 "uuid": "e7dd2adf-76e9-40d2-b3e6-498ea1f9fd4d", 00:10:46.558 "zoned": false 00:10:46.558 }, 00:10:46.558 { 00:10:46.558 "aliases": [ 00:10:46.558 "02c01c0d-737e-525e-a788-222084e19c4b" 00:10:46.558 ], 00:10:46.558 "assigned_rate_limits": { 00:10:46.558 "r_mbytes_per_sec": 0, 00:10:46.558 "rw_ios_per_sec": 0, 00:10:46.558 "rw_mbytes_per_sec": 0, 00:10:46.558 "w_mbytes_per_sec": 0 00:10:46.558 }, 00:10:46.558 "block_size": 512, 00:10:46.558 "claimed": false, 00:10:46.558 "driver_specific": { 00:10:46.558 "passthru": { 00:10:46.558 "base_bdev_name": "Malloc0", 00:10:46.558 "name": "Passthru0" 00:10:46.558 } 00:10:46.558 }, 00:10:46.558 "memory_domains": [ 00:10:46.558 { 00:10:46.558 "dma_device_id": "system", 00:10:46.558 "dma_device_type": 1 00:10:46.558 }, 00:10:46.558 { 00:10:46.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.558 "dma_device_type": 2 00:10:46.558 } 00:10:46.558 ], 00:10:46.558 "name": "Passthru0", 00:10:46.558 "num_blocks": 16384, 00:10:46.558 "product_name": "passthru", 00:10:46.558 "supported_io_types": { 00:10:46.558 "abort": true, 00:10:46.558 "compare": false, 00:10:46.558 "compare_and_write": false, 00:10:46.558 "copy": true, 00:10:46.558 "flush": true, 00:10:46.558 "get_zone_info": false, 00:10:46.558 "nvme_admin": false, 00:10:46.558 "nvme_io": false, 00:10:46.558 "nvme_io_md": false, 00:10:46.558 "nvme_iov_md": false, 00:10:46.558 "read": true, 00:10:46.558 "reset": true, 00:10:46.558 "seek_data": false, 00:10:46.558 "seek_hole": false, 00:10:46.558 "unmap": true, 00:10:46.558 "write": true, 00:10:46.558 "write_zeroes": true, 00:10:46.558 "zcopy": true, 00:10:46.558 "zone_append": false, 00:10:46.558 "zone_management": false 00:10:46.558 }, 00:10:46.558 "uuid": "02c01c0d-737e-525e-a788-222084e19c4b", 00:10:46.558 "zoned": false 00:10:46.558 } 00:10:46.558 ]' 00:10:46.558 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:46.558 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:46.558 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:46.558 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.558 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:46.558 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.558 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:46.558 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.558 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:46.558 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.558 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:46.558 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.558 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:10:46.558 13:04:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.558 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:46.558 13:04:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:46.558 ************************************ 00:10:46.558 END TEST rpc_integrity 00:10:46.558 ************************************ 00:10:46.558 13:04:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:46.558 00:10:46.558 real 0m0.329s 00:10:46.559 user 0m0.222s 00:10:46.559 sys 0m0.034s 00:10:46.559 13:04:53 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.559 13:04:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:46.559 13:04:53 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:46.559 13:04:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:46.559 13:04:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.559 13:04:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.559 ************************************ 00:10:46.559 START TEST rpc_plugins 00:10:46.559 ************************************ 00:10:46.559 13:04:53 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:10:46.559 13:04:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:10:46.559 13:04:53 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.559 13:04:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:46.559 13:04:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.559 13:04:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:46.559 13:04:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:46.559 13:04:53 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.559 13:04:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:46.817 13:04:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.817 13:04:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:10:46.817 { 00:10:46.817 "aliases": [ 00:10:46.817 "2f15b410-574e-4a48-96bb-e8bef0dfa990" 00:10:46.817 ], 00:10:46.817 "assigned_rate_limits": { 00:10:46.817 "r_mbytes_per_sec": 0, 00:10:46.817 "rw_ios_per_sec": 0, 00:10:46.817 "rw_mbytes_per_sec": 0, 00:10:46.817 "w_mbytes_per_sec": 0 00:10:46.817 }, 00:10:46.817 "block_size": 4096, 00:10:46.817 "claimed": false, 00:10:46.817 "driver_specific": {}, 00:10:46.817 "memory_domains": [ 00:10:46.817 { 00:10:46.817 "dma_device_id": "system", 00:10:46.817 "dma_device_type": 1 00:10:46.817 }, 00:10:46.817 { 00:10:46.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.817 "dma_device_type": 2 00:10:46.817 } 00:10:46.817 ], 00:10:46.817 "name": "Malloc1", 00:10:46.817 "num_blocks": 256, 00:10:46.817 "product_name": "Malloc disk", 00:10:46.818 "supported_io_types": { 00:10:46.818 "abort": true, 00:10:46.818 "compare": false, 00:10:46.818 "compare_and_write": false, 00:10:46.818 "copy": true, 00:10:46.818 "flush": true, 00:10:46.818 "get_zone_info": false, 00:10:46.818 "nvme_admin": false, 00:10:46.818 "nvme_io": false, 00:10:46.818 "nvme_io_md": false, 00:10:46.818 "nvme_iov_md": false, 00:10:46.818 "read": true, 00:10:46.818 "reset": true, 00:10:46.818 "seek_data": false, 00:10:46.818 "seek_hole": false, 00:10:46.818 "unmap": true, 00:10:46.818 "write": true, 00:10:46.818 "write_zeroes": true, 00:10:46.818 "zcopy": true, 00:10:46.818 "zone_append": false, 
00:10:46.818 "zone_management": false 00:10:46.818 }, 00:10:46.818 "uuid": "2f15b410-574e-4a48-96bb-e8bef0dfa990", 00:10:46.818 "zoned": false 00:10:46.818 } 00:10:46.818 ]' 00:10:46.818 13:04:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:10:46.818 13:04:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:46.818 13:04:53 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:46.818 13:04:53 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.818 13:04:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:46.818 13:04:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.818 13:04:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:46.818 13:04:53 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.818 13:04:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:46.818 13:04:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.818 13:04:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:46.818 13:04:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:10:46.818 13:04:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:46.818 00:10:46.818 real 0m0.160s 00:10:46.818 user 0m0.103s 00:10:46.818 sys 0m0.020s 00:10:46.818 13:04:53 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.818 13:04:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:46.818 ************************************ 00:10:46.818 END TEST rpc_plugins 00:10:46.818 ************************************ 00:10:46.818 13:04:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:10:46.818 13:04:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:46.818 13:04:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.818 13:04:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.818 ************************************ 00:10:46.818 START TEST rpc_trace_cmd_test 00:10:46.818 ************************************ 00:10:46.818 13:04:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:10:46.818 13:04:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:10:46.818 13:04:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:46.818 13:04:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.818 13:04:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.818 13:04:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.818 13:04:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:10:46.818 "bdev": { 00:10:46.818 "mask": "0x8", 00:10:46.818 "tpoint_mask": "0xffffffffffffffff" 00:10:46.818 }, 00:10:46.818 "bdev_nvme": { 00:10:46.818 "mask": "0x4000", 00:10:46.818 "tpoint_mask": "0x0" 00:10:46.818 }, 00:10:46.818 "bdev_raid": { 00:10:46.818 "mask": "0x20000", 00:10:46.818 "tpoint_mask": "0x0" 00:10:46.818 }, 00:10:46.818 "blob": { 00:10:46.818 "mask": "0x10000", 00:10:46.818 "tpoint_mask": "0x0" 00:10:46.818 }, 00:10:46.818 "blobfs": { 00:10:46.818 "mask": "0x80", 00:10:46.818 "tpoint_mask": "0x0" 00:10:46.818 }, 00:10:46.818 "dsa": { 00:10:46.818 "mask": "0x200", 00:10:46.818 "tpoint_mask": "0x0" 00:10:46.818 }, 00:10:46.818 "ftl": { 00:10:46.818 "mask": "0x40", 00:10:46.818 "tpoint_mask": "0x0" 00:10:46.818 }, 00:10:46.818 "iaa": { 00:10:46.818 "mask": "0x1000", 
00:10:46.818 "tpoint_mask": "0x0" 00:10:46.818 }, 00:10:46.818 "iscsi_conn": { 00:10:46.818 "mask": "0x2", 00:10:46.818 "tpoint_mask": "0x0" 00:10:46.818 }, 00:10:46.818 "nvme_pcie": { 00:10:46.818 "mask": "0x800", 00:10:46.818 "tpoint_mask": "0x0" 00:10:46.818 }, 00:10:46.818 "nvme_tcp": { 00:10:46.818 "mask": "0x2000", 00:10:46.818 "tpoint_mask": "0x0" 00:10:46.818 }, 00:10:46.818 "nvmf_rdma": { 00:10:46.818 "mask": "0x10", 00:10:46.818 "tpoint_mask": "0x0" 00:10:46.818 }, 00:10:46.818 "nvmf_tcp": { 00:10:46.818 "mask": "0x20", 00:10:46.818 "tpoint_mask": "0x0" 00:10:46.818 }, 00:10:46.818 "scheduler": { 00:10:46.818 "mask": "0x40000", 00:10:46.818 "tpoint_mask": "0x0" 00:10:46.818 }, 00:10:46.818 "scsi": { 00:10:46.818 "mask": "0x4", 00:10:46.818 "tpoint_mask": "0x0" 00:10:46.818 }, 00:10:46.818 "sock": { 00:10:46.818 "mask": "0x8000", 00:10:46.818 "tpoint_mask": "0x0" 00:10:46.818 }, 00:10:46.818 "thread": { 00:10:46.818 "mask": "0x400", 00:10:46.818 "tpoint_mask": "0x0" 00:10:46.818 }, 00:10:46.818 "tpoint_group_mask": "0x8", 00:10:46.818 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid59059" 00:10:46.818 }' 00:10:46.818 13:04:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:10:46.818 13:04:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:10:46.818 13:04:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:47.076 13:04:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:47.076 13:04:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:10:47.076 13:04:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:47.076 13:04:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:47.076 13:04:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:47.076 13:04:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:10:47.076 ************************************ 00:10:47.076 END TEST rpc_trace_cmd_test 00:10:47.076 ************************************ 00:10:47.076 13:04:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:10:47.076 00:10:47.076 real 0m0.270s 00:10:47.076 user 0m0.232s 00:10:47.076 sys 0m0.028s 00:10:47.076 13:04:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.076 13:04:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.076 13:04:53 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:10:47.076 13:04:53 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:10:47.076 13:04:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:47.076 13:04:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.076 13:04:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.076 ************************************ 00:10:47.076 START TEST go_rpc 00:10:47.076 ************************************ 00:10:47.076 13:04:53 rpc.go_rpc -- common/autotest_common.sh@1129 -- # go_rpc 00:10:47.076 13:04:53 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:10:47.334 13:04:53 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:10:47.334 13:04:53 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:10:47.334 13:04:53 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:10:47.334 13:04:53 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:10:47.334 13:04:53 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.334 13:04:53 rpc.go_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:47.334 13:04:53 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.334 13:04:53 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:10:47.334 13:04:53 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:10:47.334 13:04:53 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["1da2db3b-6235-432d-b7ef-7702ef8bb818"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"1da2db3b-6235-432d-b7ef-7702ef8bb818","zoned":false}]' 00:10:47.334 13:04:53 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:10:47.334 13:04:53 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:10:47.334 13:04:53 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:47.334 13:04:53 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.334 13:04:53 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.334 13:04:53 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.334 13:04:53 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:10:47.334 13:04:53 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:10:47.334 13:04:53 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:10:47.334 13:04:53 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:10:47.334 00:10:47.334 real 0m0.245s 00:10:47.334 user 0m0.172s 00:10:47.334 sys 0m0.038s 00:10:47.334 ************************************ 00:10:47.334 END TEST go_rpc 00:10:47.334 13:04:53 rpc.go_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.334 13:04:53 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.334 ************************************ 00:10:47.592 13:04:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:47.592 13:04:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:10:47.592 13:04:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:47.592 13:04:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.592 13:04:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.592 ************************************ 00:10:47.592 START TEST rpc_daemon_integrity 00:10:47.592 ************************************ 00:10:47.592 13:04:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:10:47.592 13:04:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.592 13:04:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.592 13:04:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.592 13:04:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.592 13:04:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:47.592 13:04:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:47.592 
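rpc_integrity above, and the rpc_daemon_integrity run starting here, drive the same bdev lifecycle through rpc_cmd (the suite's wrapper around scripts/rpc.py) and verify each step by piping bdev_get_bdevs through jq. A condensed sketch of that pattern as it appears in the trace (bdev names match the Malloc0/Passthru0 case above; this is a paraphrase of rpc.sh's rpc_integrity function, not its exact text):

    # Create a malloc bdev, wrap it in a passthru bdev, and confirm both
    # are reported; then tear them down and confirm the list is empty.
    malloc=$(rpc_cmd bdev_malloc_create 8 512)        # prints e.g. "Malloc0"
    rpc_cmd bdev_passthru_create -b "$malloc" -p Passthru0
    [ "$(rpc_cmd bdev_get_bdevs | jq length)" == 2 ]
    rpc_cmd bdev_passthru_delete Passthru0
    rpc_cmd bdev_malloc_delete "$malloc"
    [ "$(rpc_cmd bdev_get_bdevs | jq length)" == 0 ]

The '[' 2 == 2 ']' and '[' 0 == 0 ']' checks in the xtrace output are exactly these assertions after jq has counted the JSON array of bdevs.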
13:04:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:47.592 13:04:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:47.592 13:04:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.592 13:04:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.592 13:04:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.592 13:04:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:10:47.592 13:04:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:47.592 13:04:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.592 13:04:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.592 13:04:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.592 13:04:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:47.592 { 00:10:47.592 "aliases": [ 00:10:47.592 "5b38c7c9-1d1e-4731-b954-68866d07f90e" 00:10:47.592 ], 00:10:47.592 "assigned_rate_limits": { 00:10:47.592 "r_mbytes_per_sec": 0, 00:10:47.592 "rw_ios_per_sec": 0, 00:10:47.592 "rw_mbytes_per_sec": 0, 00:10:47.592 "w_mbytes_per_sec": 0 00:10:47.592 }, 00:10:47.592 "block_size": 512, 00:10:47.592 "claimed": false, 00:10:47.592 "driver_specific": {}, 00:10:47.593 "memory_domains": [ 00:10:47.593 { 00:10:47.593 "dma_device_id": "system", 00:10:47.593 "dma_device_type": 1 00:10:47.593 }, 00:10:47.593 { 00:10:47.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.593 "dma_device_type": 2 00:10:47.593 } 00:10:47.593 ], 00:10:47.593 "name": "Malloc3", 00:10:47.593 "num_blocks": 16384, 00:10:47.593 "product_name": "Malloc disk", 00:10:47.593 "supported_io_types": { 00:10:47.593 "abort": true, 00:10:47.593 "compare": false, 00:10:47.593 "compare_and_write": false, 00:10:47.593 "copy": true, 00:10:47.593 "flush": true, 00:10:47.593 "get_zone_info": false, 00:10:47.593 "nvme_admin": false, 00:10:47.593 "nvme_io": false, 00:10:47.593 "nvme_io_md": false, 00:10:47.593 "nvme_iov_md": false, 00:10:47.593 "read": true, 00:10:47.593 "reset": true, 00:10:47.593 "seek_data": false, 00:10:47.593 "seek_hole": false, 00:10:47.593 "unmap": true, 00:10:47.593 "write": true, 00:10:47.593 "write_zeroes": true, 00:10:47.593 "zcopy": true, 00:10:47.593 "zone_append": false, 00:10:47.593 "zone_management": false 00:10:47.593 }, 00:10:47.593 "uuid": "5b38c7c9-1d1e-4731-b954-68866d07f90e", 00:10:47.593 "zoned": false 00:10:47.593 } 00:10:47.593 ]' 00:10:47.593 13:04:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:47.593 13:04:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:47.593 13:04:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:10:47.593 13:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.593 13:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.593 [2024-12-06 13:04:54.021931] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:47.593 [2024-12-06 13:04:54.021992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.593 [2024-12-06 13:04:54.022015] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2126be0 00:10:47.593 [2024-12-06 13:04:54.022025] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:10:47.593 [2024-12-06 13:04:54.023545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.593 [2024-12-06 13:04:54.023581] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:47.593 Passthru0 00:10:47.593 13:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.593 13:04:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:47.593 13:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.593 13:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.593 13:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.593 13:04:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:47.593 { 00:10:47.593 "aliases": [ 00:10:47.593 "5b38c7c9-1d1e-4731-b954-68866d07f90e" 00:10:47.593 ], 00:10:47.593 "assigned_rate_limits": { 00:10:47.593 "r_mbytes_per_sec": 0, 00:10:47.593 "rw_ios_per_sec": 0, 00:10:47.593 "rw_mbytes_per_sec": 0, 00:10:47.593 "w_mbytes_per_sec": 0 00:10:47.593 }, 00:10:47.593 "block_size": 512, 00:10:47.593 "claim_type": "exclusive_write", 00:10:47.593 "claimed": true, 00:10:47.593 "driver_specific": {}, 00:10:47.593 "memory_domains": [ 00:10:47.593 { 00:10:47.593 "dma_device_id": "system", 00:10:47.593 "dma_device_type": 1 00:10:47.593 }, 00:10:47.593 { 00:10:47.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.593 "dma_device_type": 2 00:10:47.593 } 00:10:47.593 ], 00:10:47.593 "name": "Malloc3", 00:10:47.593 "num_blocks": 16384, 00:10:47.593 "product_name": "Malloc disk", 00:10:47.593 "supported_io_types": { 00:10:47.593 "abort": true, 00:10:47.593 "compare": false, 00:10:47.593 "compare_and_write": false, 00:10:47.593 "copy": true, 00:10:47.593 "flush": true, 00:10:47.593 "get_zone_info": false, 00:10:47.593 "nvme_admin": false, 00:10:47.593 "nvme_io": false, 00:10:47.593 "nvme_io_md": false, 00:10:47.593 "nvme_iov_md": false, 00:10:47.593 "read": true, 00:10:47.593 "reset": true, 00:10:47.593 "seek_data": false, 00:10:47.593 "seek_hole": false, 00:10:47.593 "unmap": true, 00:10:47.593 "write": true, 00:10:47.593 "write_zeroes": true, 00:10:47.593 "zcopy": true, 00:10:47.593 "zone_append": false, 00:10:47.593 "zone_management": false 00:10:47.593 }, 00:10:47.593 "uuid": "5b38c7c9-1d1e-4731-b954-68866d07f90e", 00:10:47.593 "zoned": false 00:10:47.593 }, 00:10:47.593 { 00:10:47.593 "aliases": [ 00:10:47.593 "bf2d2902-7435-507a-bf89-398538539197" 00:10:47.593 ], 00:10:47.593 "assigned_rate_limits": { 00:10:47.593 "r_mbytes_per_sec": 0, 00:10:47.593 "rw_ios_per_sec": 0, 00:10:47.593 "rw_mbytes_per_sec": 0, 00:10:47.593 "w_mbytes_per_sec": 0 00:10:47.593 }, 00:10:47.593 "block_size": 512, 00:10:47.593 "claimed": false, 00:10:47.593 "driver_specific": { 00:10:47.593 "passthru": { 00:10:47.593 "base_bdev_name": "Malloc3", 00:10:47.593 "name": "Passthru0" 00:10:47.593 } 00:10:47.593 }, 00:10:47.593 "memory_domains": [ 00:10:47.593 { 00:10:47.593 "dma_device_id": "system", 00:10:47.593 "dma_device_type": 1 00:10:47.593 }, 00:10:47.593 { 00:10:47.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.593 "dma_device_type": 2 00:10:47.593 } 00:10:47.593 ], 00:10:47.593 "name": "Passthru0", 00:10:47.593 "num_blocks": 16384, 00:10:47.593 "product_name": "passthru", 00:10:47.593 "supported_io_types": { 00:10:47.593 "abort": true, 00:10:47.593 "compare": false, 00:10:47.593 "compare_and_write": false, 00:10:47.593 "copy": true, 
00:10:47.593 "flush": true, 00:10:47.593 "get_zone_info": false, 00:10:47.593 "nvme_admin": false, 00:10:47.593 "nvme_io": false, 00:10:47.593 "nvme_io_md": false, 00:10:47.593 "nvme_iov_md": false, 00:10:47.593 "read": true, 00:10:47.593 "reset": true, 00:10:47.593 "seek_data": false, 00:10:47.593 "seek_hole": false, 00:10:47.593 "unmap": true, 00:10:47.593 "write": true, 00:10:47.593 "write_zeroes": true, 00:10:47.593 "zcopy": true, 00:10:47.593 "zone_append": false, 00:10:47.593 "zone_management": false 00:10:47.593 }, 00:10:47.593 "uuid": "bf2d2902-7435-507a-bf89-398538539197", 00:10:47.593 "zoned": false 00:10:47.593 } 00:10:47.593 ]' 00:10:47.593 13:04:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:47.593 13:04:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:47.593 13:04:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:47.593 13:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.593 13:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.852 13:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.852 13:04:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:10:47.852 13:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.852 13:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.852 13:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.852 13:04:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:47.852 13:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.852 13:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.852 13:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.852 13:04:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:47.852 13:04:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:47.852 13:04:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:47.852 00:10:47.852 real 0m0.310s 00:10:47.852 user 0m0.211s 00:10:47.852 sys 0m0.039s 00:10:47.852 13:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.852 ************************************ 00:10:47.852 END TEST rpc_daemon_integrity 00:10:47.852 ************************************ 00:10:47.852 13:04:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.852 13:04:54 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:47.852 13:04:54 rpc -- rpc/rpc.sh@84 -- # killprocess 59059 00:10:47.852 13:04:54 rpc -- common/autotest_common.sh@954 -- # '[' -z 59059 ']' 00:10:47.852 13:04:54 rpc -- common/autotest_common.sh@958 -- # kill -0 59059 00:10:47.852 13:04:54 rpc -- common/autotest_common.sh@959 -- # uname 00:10:47.852 13:04:54 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.852 13:04:54 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59059 00:10:47.852 13:04:54 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.852 killing process with pid 59059 00:10:47.852 13:04:54 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.852 13:04:54 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59059' 00:10:47.852 13:04:54 rpc -- 
common/autotest_common.sh@973 -- # kill 59059 00:10:47.852 13:04:54 rpc -- common/autotest_common.sh@978 -- # wait 59059 00:10:48.109 00:10:48.109 real 0m2.446s 00:10:48.109 user 0m3.400s 00:10:48.109 sys 0m0.610s 00:10:48.109 13:04:54 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.109 ************************************ 00:10:48.109 END TEST rpc 00:10:48.109 ************************************ 00:10:48.109 13:04:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.109 13:04:54 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:48.109 13:04:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:48.109 13:04:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.109 13:04:54 -- common/autotest_common.sh@10 -- # set +x 00:10:48.109 ************************************ 00:10:48.109 START TEST skip_rpc 00:10:48.109 ************************************ 00:10:48.109 13:04:54 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:48.109 * Looking for test storage... 00:10:48.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:48.109 13:04:54 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:48.109 13:04:54 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:48.109 13:04:54 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:48.367 13:04:54 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@345 -- # : 1 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.367 13:04:54 skip_rpc -- scripts/common.sh@368 -- # return 0 00:10:48.367 13:04:54 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.367 13:04:54 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:48.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.367 --rc genhtml_branch_coverage=1 00:10:48.367 --rc genhtml_function_coverage=1 00:10:48.367 --rc genhtml_legend=1 00:10:48.367 --rc geninfo_all_blocks=1 00:10:48.367 --rc geninfo_unexecuted_blocks=1 00:10:48.367 00:10:48.367 ' 00:10:48.367 13:04:54 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:48.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.367 --rc genhtml_branch_coverage=1 00:10:48.367 --rc genhtml_function_coverage=1 00:10:48.367 --rc genhtml_legend=1 00:10:48.367 --rc geninfo_all_blocks=1 00:10:48.367 --rc geninfo_unexecuted_blocks=1 00:10:48.367 00:10:48.367 ' 00:10:48.367 13:04:54 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:48.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.367 --rc genhtml_branch_coverage=1 00:10:48.367 --rc genhtml_function_coverage=1 00:10:48.367 --rc genhtml_legend=1 00:10:48.367 --rc geninfo_all_blocks=1 00:10:48.367 --rc geninfo_unexecuted_blocks=1 00:10:48.367 00:10:48.367 ' 00:10:48.367 13:04:54 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:48.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.367 --rc genhtml_branch_coverage=1 00:10:48.367 --rc genhtml_function_coverage=1 00:10:48.367 --rc genhtml_legend=1 00:10:48.367 --rc geninfo_all_blocks=1 00:10:48.367 --rc geninfo_unexecuted_blocks=1 00:10:48.367 00:10:48.367 ' 00:10:48.367 13:04:54 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:48.367 13:04:54 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:48.367 13:04:54 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:48.367 13:04:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:48.367 13:04:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.367 13:04:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.367 ************************************ 00:10:48.367 START TEST skip_rpc 00:10:48.367 ************************************ 00:10:48.367 13:04:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:10:48.367 13:04:54 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=59314 00:10:48.367 13:04:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:10:48.367 13:04:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:48.367 13:04:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:48.367 [2024-12-06 13:04:54.824349] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:10:48.367 [2024-12-06 13:04:54.824456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59314 ] 00:10:48.626 [2024-12-06 13:04:54.965623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.626 [2024-12-06 13:04:55.000512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.886 13:04:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:53.886 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:53.886 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:53.886 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:53.886 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:53.886 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:53.886 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:53.886 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:10:53.886 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.886 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.886 2024/12/06 13:04:59 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:10:53.886 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:53.886 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:53.886 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:53.886 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:53.886 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:53.886 13:04:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:53.886 13:04:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59314 00:10:53.886 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 59314 ']' 00:10:53.887 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 59314 00:10:53.887 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:10:53.887 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.887 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59314 00:10:53.887 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.887 killing process with pid 59314 00:10:53.887 13:04:59 skip_rpc.skip_rpc 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.887 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59314' 00:10:53.887 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 59314 00:10:53.887 13:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 59314 00:10:53.887 00:10:53.887 real 0m5.305s 00:10:53.887 user 0m5.020s 00:10:53.887 sys 0m0.194s 00:10:53.887 ************************************ 00:10:53.887 END TEST skip_rpc 00:10:53.887 ************************************ 00:10:53.887 13:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.887 13:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.887 13:05:00 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:53.887 13:05:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:53.887 13:05:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.887 13:05:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.887 ************************************ 00:10:53.887 START TEST skip_rpc_with_json 00:10:53.887 ************************************ 00:10:53.887 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:10:53.887 13:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:53.887 13:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59402 00:10:53.887 13:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:53.887 13:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59402 00:10:53.887 13:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:53.887 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59402 ']' 00:10:53.887 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.887 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.887 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.887 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.887 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:53.887 [2024-12-06 13:05:00.170668] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
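For contrast with skip_rpc_with_json starting here, the skip_rpc run above (pid 59314) started the target with --no-rpc-server, so /var/tmp/spdk.sock was never created and the negative test asserts that an RPC call fails with the Unix-socket connect error logged above. A rough sketch of that check, with the suite's NOT/killprocess helpers from common/autotest_common.sh reduced to plain shell:

    # With --no-rpc-server the target never opens its RPC socket, so any
    # rpc_cmd must fail; the test treats an RPC success as a fatal error.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5   # nothing to poll for: the socket is intentionally absent
    if rpc_cmd spdk_get_version; then
        echo "RPC server unexpectedly answered" >&2
        kill "$spdk_pid"; exit 1
    fi
    kill "$spdk_pid"; wait "$spdk_pid"

skip_rpc_with_json (pid 59402, below) then does the opposite: it runs the target with the RPC server enabled, builds up nvmf state over RPC, and saves it with save_config to /home/vagrant/spdk_repo/spdk/test/rpc/config.json.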
00:10:53.887 [2024-12-06 13:05:00.170770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59402 ] 00:10:53.887 [2024-12-06 13:05:00.320719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.887 [2024-12-06 13:05:00.360360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.145 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.145 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:10:54.145 13:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:54.145 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.145 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:54.145 [2024-12-06 13:05:00.552988] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:54.145 2024/12/06 13:05:00 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:10:54.145 request: 00:10:54.145 { 00:10:54.145 "method": "nvmf_get_transports", 00:10:54.145 "params": { 00:10:54.145 "trtype": "tcp" 00:10:54.145 } 00:10:54.145 } 00:10:54.145 Got JSON-RPC error response 00:10:54.145 GoRPCClient: error on JSON-RPC call 00:10:54.145 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:54.145 13:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:54.145 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.145 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:54.145 [2024-12-06 13:05:00.565195] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.145 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.145 13:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:54.145 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.145 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:54.404 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.404 13:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:54.404 { 00:10:54.404 "subsystems": [ 00:10:54.404 { 00:10:54.404 "subsystem": "fsdev", 00:10:54.404 "config": [ 00:10:54.404 { 00:10:54.404 "method": "fsdev_set_opts", 00:10:54.404 "params": { 00:10:54.404 "fsdev_io_cache_size": 256, 00:10:54.404 "fsdev_io_pool_size": 65535 00:10:54.404 } 00:10:54.404 } 00:10:54.404 ] 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "subsystem": "keyring", 00:10:54.404 "config": [] 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "subsystem": "iobuf", 00:10:54.404 "config": [ 00:10:54.404 { 00:10:54.404 "method": "iobuf_set_options", 00:10:54.404 "params": { 00:10:54.404 "enable_numa": false, 00:10:54.404 "large_bufsize": 135168, 00:10:54.404 "large_pool_count": 1024, 00:10:54.404 "small_bufsize": 8192, 00:10:54.404 "small_pool_count": 8192 00:10:54.404 } 
00:10:54.404 } 00:10:54.404 ] 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "subsystem": "sock", 00:10:54.404 "config": [ 00:10:54.404 { 00:10:54.404 "method": "sock_set_default_impl", 00:10:54.404 "params": { 00:10:54.404 "impl_name": "posix" 00:10:54.404 } 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "method": "sock_impl_set_options", 00:10:54.404 "params": { 00:10:54.404 "enable_ktls": false, 00:10:54.404 "enable_placement_id": 0, 00:10:54.404 "enable_quickack": false, 00:10:54.404 "enable_recv_pipe": true, 00:10:54.404 "enable_zerocopy_send_client": false, 00:10:54.404 "enable_zerocopy_send_server": true, 00:10:54.404 "impl_name": "ssl", 00:10:54.404 "recv_buf_size": 4096, 00:10:54.404 "send_buf_size": 4096, 00:10:54.404 "tls_version": 0, 00:10:54.404 "zerocopy_threshold": 0 00:10:54.404 } 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "method": "sock_impl_set_options", 00:10:54.404 "params": { 00:10:54.404 "enable_ktls": false, 00:10:54.404 "enable_placement_id": 0, 00:10:54.404 "enable_quickack": false, 00:10:54.404 "enable_recv_pipe": true, 00:10:54.404 "enable_zerocopy_send_client": false, 00:10:54.404 "enable_zerocopy_send_server": true, 00:10:54.404 "impl_name": "posix", 00:10:54.404 "recv_buf_size": 2097152, 00:10:54.404 "send_buf_size": 2097152, 00:10:54.404 "tls_version": 0, 00:10:54.404 "zerocopy_threshold": 0 00:10:54.404 } 00:10:54.404 } 00:10:54.404 ] 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "subsystem": "vmd", 00:10:54.404 "config": [] 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "subsystem": "accel", 00:10:54.404 "config": [ 00:10:54.404 { 00:10:54.404 "method": "accel_set_options", 00:10:54.404 "params": { 00:10:54.404 "buf_count": 2048, 00:10:54.404 "large_cache_size": 16, 00:10:54.404 "sequence_count": 2048, 00:10:54.404 "small_cache_size": 128, 00:10:54.404 "task_count": 2048 00:10:54.404 } 00:10:54.404 } 00:10:54.404 ] 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "subsystem": "bdev", 00:10:54.404 "config": [ 00:10:54.404 { 00:10:54.404 "method": "bdev_set_options", 00:10:54.404 "params": { 00:10:54.404 "bdev_auto_examine": true, 00:10:54.404 "bdev_io_cache_size": 256, 00:10:54.404 "bdev_io_pool_size": 65535, 00:10:54.404 "iobuf_large_cache_size": 16, 00:10:54.404 "iobuf_small_cache_size": 128 00:10:54.404 } 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "method": "bdev_raid_set_options", 00:10:54.404 "params": { 00:10:54.404 "process_max_bandwidth_mb_sec": 0, 00:10:54.404 "process_window_size_kb": 1024 00:10:54.404 } 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "method": "bdev_iscsi_set_options", 00:10:54.404 "params": { 00:10:54.404 "timeout_sec": 30 00:10:54.404 } 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "method": "bdev_nvme_set_options", 00:10:54.404 "params": { 00:10:54.404 "action_on_timeout": "none", 00:10:54.404 "allow_accel_sequence": false, 00:10:54.404 "arbitration_burst": 0, 00:10:54.404 "bdev_retry_count": 3, 00:10:54.404 "ctrlr_loss_timeout_sec": 0, 00:10:54.404 "delay_cmd_submit": true, 00:10:54.404 "dhchap_dhgroups": [ 00:10:54.404 "null", 00:10:54.404 "ffdhe2048", 00:10:54.404 "ffdhe3072", 00:10:54.404 "ffdhe4096", 00:10:54.404 "ffdhe6144", 00:10:54.404 "ffdhe8192" 00:10:54.404 ], 00:10:54.404 "dhchap_digests": [ 00:10:54.404 "sha256", 00:10:54.404 "sha384", 00:10:54.404 "sha512" 00:10:54.404 ], 00:10:54.404 "disable_auto_failback": false, 00:10:54.404 "fast_io_fail_timeout_sec": 0, 00:10:54.404 "generate_uuids": false, 00:10:54.404 "high_priority_weight": 0, 00:10:54.404 "io_path_stat": false, 00:10:54.404 "io_queue_requests": 0, 00:10:54.404 
"keep_alive_timeout_ms": 10000, 00:10:54.404 "low_priority_weight": 0, 00:10:54.404 "medium_priority_weight": 0, 00:10:54.404 "nvme_adminq_poll_period_us": 10000, 00:10:54.404 "nvme_error_stat": false, 00:10:54.404 "nvme_ioq_poll_period_us": 0, 00:10:54.404 "rdma_cm_event_timeout_ms": 0, 00:10:54.404 "rdma_max_cq_size": 0, 00:10:54.404 "rdma_srq_size": 0, 00:10:54.404 "reconnect_delay_sec": 0, 00:10:54.404 "timeout_admin_us": 0, 00:10:54.404 "timeout_us": 0, 00:10:54.404 "transport_ack_timeout": 0, 00:10:54.404 "transport_retry_count": 4, 00:10:54.404 "transport_tos": 0 00:10:54.404 } 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "method": "bdev_nvme_set_hotplug", 00:10:54.404 "params": { 00:10:54.404 "enable": false, 00:10:54.404 "period_us": 100000 00:10:54.404 } 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "method": "bdev_wait_for_examine" 00:10:54.404 } 00:10:54.404 ] 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "subsystem": "scsi", 00:10:54.404 "config": null 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "subsystem": "scheduler", 00:10:54.404 "config": [ 00:10:54.404 { 00:10:54.404 "method": "framework_set_scheduler", 00:10:54.404 "params": { 00:10:54.404 "name": "static" 00:10:54.404 } 00:10:54.404 } 00:10:54.404 ] 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "subsystem": "vhost_scsi", 00:10:54.404 "config": [] 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "subsystem": "vhost_blk", 00:10:54.404 "config": [] 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "subsystem": "ublk", 00:10:54.404 "config": [] 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "subsystem": "nbd", 00:10:54.404 "config": [] 00:10:54.404 }, 00:10:54.404 { 00:10:54.404 "subsystem": "nvmf", 00:10:54.404 "config": [ 00:10:54.404 { 00:10:54.404 "method": "nvmf_set_config", 00:10:54.404 "params": { 00:10:54.404 "admin_cmd_passthru": { 00:10:54.405 "identify_ctrlr": false 00:10:54.405 }, 00:10:54.405 "dhchap_dhgroups": [ 00:10:54.405 "null", 00:10:54.405 "ffdhe2048", 00:10:54.405 "ffdhe3072", 00:10:54.405 "ffdhe4096", 00:10:54.405 "ffdhe6144", 00:10:54.405 "ffdhe8192" 00:10:54.405 ], 00:10:54.405 "dhchap_digests": [ 00:10:54.405 "sha256", 00:10:54.405 "sha384", 00:10:54.405 "sha512" 00:10:54.405 ], 00:10:54.405 "discovery_filter": "match_any" 00:10:54.405 } 00:10:54.405 }, 00:10:54.405 { 00:10:54.405 "method": "nvmf_set_max_subsystems", 00:10:54.405 "params": { 00:10:54.405 "max_subsystems": 1024 00:10:54.405 } 00:10:54.405 }, 00:10:54.405 { 00:10:54.405 "method": "nvmf_set_crdt", 00:10:54.405 "params": { 00:10:54.405 "crdt1": 0, 00:10:54.405 "crdt2": 0, 00:10:54.405 "crdt3": 0 00:10:54.405 } 00:10:54.405 }, 00:10:54.405 { 00:10:54.405 "method": "nvmf_create_transport", 00:10:54.405 "params": { 00:10:54.405 "abort_timeout_sec": 1, 00:10:54.405 "ack_timeout": 0, 00:10:54.405 "buf_cache_size": 4294967295, 00:10:54.405 "c2h_success": true, 00:10:54.405 "data_wr_pool_size": 0, 00:10:54.405 "dif_insert_or_strip": false, 00:10:54.405 "in_capsule_data_size": 4096, 00:10:54.405 "io_unit_size": 131072, 00:10:54.405 "max_aq_depth": 128, 00:10:54.405 "max_io_qpairs_per_ctrlr": 127, 00:10:54.405 "max_io_size": 131072, 00:10:54.405 "max_queue_depth": 128, 00:10:54.405 "num_shared_buffers": 511, 00:10:54.405 "sock_priority": 0, 00:10:54.405 "trtype": "TCP", 00:10:54.405 "zcopy": false 00:10:54.405 } 00:10:54.405 } 00:10:54.405 ] 00:10:54.405 }, 00:10:54.405 { 00:10:54.405 "subsystem": "iscsi", 00:10:54.405 "config": [ 00:10:54.405 { 00:10:54.405 "method": "iscsi_set_options", 00:10:54.405 "params": { 00:10:54.405 "allow_duplicated_isid": false, 
00:10:54.405 "chap_group": 0, 00:10:54.405 "data_out_pool_size": 2048, 00:10:54.405 "default_time2retain": 20, 00:10:54.405 "default_time2wait": 2, 00:10:54.405 "disable_chap": false, 00:10:54.405 "error_recovery_level": 0, 00:10:54.405 "first_burst_length": 8192, 00:10:54.405 "immediate_data": true, 00:10:54.405 "immediate_data_pool_size": 16384, 00:10:54.405 "max_connections_per_session": 2, 00:10:54.405 "max_large_datain_per_connection": 64, 00:10:54.405 "max_queue_depth": 64, 00:10:54.405 "max_r2t_per_connection": 4, 00:10:54.405 "max_sessions": 128, 00:10:54.405 "mutual_chap": false, 00:10:54.405 "node_base": "iqn.2016-06.io.spdk", 00:10:54.405 "nop_in_interval": 30, 00:10:54.405 "nop_timeout": 60, 00:10:54.405 "pdu_pool_size": 36864, 00:10:54.405 "require_chap": false 00:10:54.405 } 00:10:54.405 } 00:10:54.405 ] 00:10:54.405 } 00:10:54.405 ] 00:10:54.405 } 00:10:54.405 13:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:54.405 13:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59402 00:10:54.405 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59402 ']' 00:10:54.405 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59402 00:10:54.405 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:54.405 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.405 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59402 00:10:54.405 killing process with pid 59402 00:10:54.405 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:54.405 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:54.405 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59402' 00:10:54.405 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59402 00:10:54.405 13:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59402 00:10:54.663 13:05:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59427 00:10:54.663 13:05:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:54.663 13:05:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59427 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59427 ']' 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59427 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59427 00:10:59.929 killing process with pid 59427 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59427' 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59427 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59427 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:59.929 ************************************ 00:10:59.929 END TEST skip_rpc_with_json 00:10:59.929 ************************************ 00:10:59.929 00:10:59.929 real 0m6.215s 00:10:59.929 user 0m5.906s 00:10:59.929 sys 0m0.470s 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:59.929 13:05:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:59.929 13:05:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:59.929 13:05:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.929 13:05:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.929 ************************************ 00:10:59.929 START TEST skip_rpc_with_delay 00:10:59.929 ************************************ 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:59.929 13:05:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:59.930 [2024-12-06 13:05:06.451730] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
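For reference, the failure recorded above is the whole point of skip_rpc_with_delay: spdk_tgt must refuse --wait-for-rpc when --no-rpc-server disables the RPC server. A minimal sketch of that check, assuming only the spdk_tgt path already shown in this log:

    # skip_rpc_with_delay expects this flag combination to be rejected;
    # app.c logs: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    if "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: target started without an RPC server" >&2
        exit 1
    fi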
00:11:00.187 13:05:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:11:00.187 13:05:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:00.187 13:05:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:00.187 13:05:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:00.187 00:11:00.187 real 0m0.112s 00:11:00.187 user 0m0.079s 00:11:00.187 sys 0m0.031s 00:11:00.187 13:05:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.187 ************************************ 00:11:00.188 END TEST skip_rpc_with_delay 00:11:00.188 ************************************ 00:11:00.188 13:05:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:11:00.188 13:05:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:11:00.188 13:05:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:11:00.188 13:05:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:11:00.188 13:05:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:00.188 13:05:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.188 13:05:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.188 ************************************ 00:11:00.188 START TEST exit_on_failed_rpc_init 00:11:00.188 ************************************ 00:11:00.188 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:11:00.188 13:05:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59537 00:11:00.188 13:05:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:00.188 13:05:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59537 00:11:00.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.188 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59537 ']' 00:11:00.188 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.188 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.188 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.188 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.188 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:00.188 [2024-12-06 13:05:06.594804] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:11:00.188 [2024-12-06 13:05:06.595146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59537 ] 00:11:00.446 [2024-12-06 13:05:06.739732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.446 [2024-12-06 13:05:06.778055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.703 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.703 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:11:00.703 13:05:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:00.703 13:05:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:00.704 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:11:00.704 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:00.704 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:00.704 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:00.704 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:00.704 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:00.704 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:00.704 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:00.704 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:00.704 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:00.704 13:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:00.704 [2024-12-06 13:05:07.032070] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:00.704 [2024-12-06 13:05:07.032178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59553 ] 00:11:00.704 [2024-12-06 13:05:07.171978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.704 [2024-12-06 13:05:07.211917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.704 [2024-12-06 13:05:07.212211] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
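The exit_on_failed_rpc_init flow traced here boils down to two targets contending for one RPC socket. A rough sketch under that reading (the sleep is a simplification; the suite polls with waitforlisten, as seen above):

    # First target claims the default RPC socket /var/tmp/spdk.sock.
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x1 &
    first_pid=$!
    sleep 1   # crude wait; the harness uses waitforlisten instead
    # Second target on another core mask must fail: rpc.c reports the Unix
    # domain socket path in use and spdk_app_stop exits non-zero.
    "$SPDK_TGT" -m 0x2 && echo "unexpected: second target started" >&2
    kill -SIGINT "$first_pid"
    wait "$first_pid"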
00:11:00.704 [2024-12-06 13:05:07.212351] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:00.704 [2024-12-06 13:05:07.212560] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:00.962 13:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:11:00.962 13:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:00.962 13:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:11:00.962 13:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:11:00.962 13:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:11:00.962 13:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:00.962 13:05:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:00.962 13:05:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59537 00:11:00.962 13:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59537 ']' 00:11:00.962 13:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59537 00:11:00.962 13:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:11:00.962 13:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.962 13:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59537 00:11:00.962 killing process with pid 59537 00:11:00.962 13:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.962 13:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.962 13:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59537' 00:11:00.962 13:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59537 00:11:00.962 13:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59537 00:11:01.219 00:11:01.219 real 0m1.028s 00:11:01.219 user 0m1.194s 00:11:01.219 sys 0m0.282s 00:11:01.219 ************************************ 00:11:01.219 END TEST exit_on_failed_rpc_init 00:11:01.219 ************************************ 00:11:01.219 13:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.219 13:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:01.219 13:05:07 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:01.219 00:11:01.219 real 0m13.059s 00:11:01.219 user 0m12.401s 00:11:01.219 sys 0m1.176s 00:11:01.219 13:05:07 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.219 13:05:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.219 ************************************ 00:11:01.219 END TEST skip_rpc 00:11:01.219 ************************************ 00:11:01.219 13:05:07 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:01.219 13:05:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:01.219 13:05:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.219 13:05:07 -- common/autotest_common.sh@10 -- # set +x 00:11:01.219 
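rpc_client, which starts next, is the thinnest suite in this run: run_test wraps a single compiled binary that prints OK on success. The pattern, using only names from this log:

    # run_test (autotest_common.sh) brackets a test with the START/END TEST
    # banners seen throughout this log; rpc_client.sh then just executes the
    # compiled client against the target's RPC socket.
    run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
    # inside rpc_client.sh:
    /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test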
************************************ 00:11:01.219 START TEST rpc_client 00:11:01.219 ************************************ 00:11:01.219 13:05:07 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:01.219 * Looking for test storage... 00:11:01.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:11:01.219 13:05:07 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:01.219 13:05:07 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:11:01.219 13:05:07 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:01.478 13:05:07 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@345 -- # : 1 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@353 -- # local d=1 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@355 -- # echo 1 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@353 -- # local d=2 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@355 -- # echo 2 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.478 13:05:07 rpc_client -- scripts/common.sh@368 -- # return 0 00:11:01.478 13:05:07 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.478 13:05:07 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:01.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.478 --rc genhtml_branch_coverage=1 00:11:01.478 --rc genhtml_function_coverage=1 00:11:01.478 --rc genhtml_legend=1 00:11:01.478 --rc geninfo_all_blocks=1 00:11:01.478 --rc geninfo_unexecuted_blocks=1 00:11:01.478 00:11:01.478 ' 00:11:01.478 13:05:07 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:01.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.478 --rc genhtml_branch_coverage=1 00:11:01.478 --rc genhtml_function_coverage=1 00:11:01.478 --rc genhtml_legend=1 00:11:01.478 --rc geninfo_all_blocks=1 00:11:01.478 --rc geninfo_unexecuted_blocks=1 00:11:01.478 00:11:01.478 ' 00:11:01.478 13:05:07 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:01.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.478 --rc genhtml_branch_coverage=1 00:11:01.478 --rc genhtml_function_coverage=1 00:11:01.478 --rc genhtml_legend=1 00:11:01.478 --rc geninfo_all_blocks=1 00:11:01.478 --rc geninfo_unexecuted_blocks=1 00:11:01.478 00:11:01.478 ' 00:11:01.478 13:05:07 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:01.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.478 --rc genhtml_branch_coverage=1 00:11:01.478 --rc genhtml_function_coverage=1 00:11:01.478 --rc genhtml_legend=1 00:11:01.478 --rc geninfo_all_blocks=1 00:11:01.478 --rc geninfo_unexecuted_blocks=1 00:11:01.478 00:11:01.478 ' 00:11:01.478 13:05:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:11:01.478 OK 00:11:01.478 13:05:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:11:01.478 00:11:01.478 real 0m0.217s 00:11:01.478 user 0m0.159s 00:11:01.478 sys 0m0.065s 00:11:01.478 13:05:07 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.478 13:05:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:11:01.478 ************************************ 00:11:01.478 END TEST rpc_client 00:11:01.478 ************************************ 00:11:01.478 13:05:07 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:01.478 13:05:07 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:01.478 13:05:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.478 13:05:07 -- common/autotest_common.sh@10 -- # set +x 00:11:01.478 ************************************ 00:11:01.478 START TEST json_config 00:11:01.478 ************************************ 00:11:01.478 13:05:07 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:01.478 13:05:07 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:01.478 13:05:07 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:11:01.478 13:05:07 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:01.737 13:05:08 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:01.737 13:05:08 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.737 13:05:08 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.737 13:05:08 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.737 13:05:08 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.737 13:05:08 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.737 13:05:08 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.737 13:05:08 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.737 13:05:08 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.737 13:05:08 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.737 13:05:08 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.737 13:05:08 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.737 13:05:08 json_config -- scripts/common.sh@344 -- # case "$op" in 00:11:01.737 13:05:08 json_config -- scripts/common.sh@345 -- # : 1 00:11:01.737 13:05:08 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.737 13:05:08 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.737 13:05:08 json_config -- scripts/common.sh@365 -- # decimal 1 00:11:01.737 13:05:08 json_config -- scripts/common.sh@353 -- # local d=1 00:11:01.737 13:05:08 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.737 13:05:08 json_config -- scripts/common.sh@355 -- # echo 1 00:11:01.737 13:05:08 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.737 13:05:08 json_config -- scripts/common.sh@366 -- # decimal 2 00:11:01.737 13:05:08 json_config -- scripts/common.sh@353 -- # local d=2 00:11:01.737 13:05:08 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.737 13:05:08 json_config -- scripts/common.sh@355 -- # echo 2 00:11:01.737 13:05:08 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.737 13:05:08 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.737 13:05:08 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.737 13:05:08 json_config -- scripts/common.sh@368 -- # return 0 00:11:01.737 13:05:08 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.737 13:05:08 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:01.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.737 --rc genhtml_branch_coverage=1 00:11:01.737 --rc genhtml_function_coverage=1 00:11:01.737 --rc genhtml_legend=1 00:11:01.737 --rc geninfo_all_blocks=1 00:11:01.737 --rc geninfo_unexecuted_blocks=1 00:11:01.737 00:11:01.737 ' 00:11:01.737 13:05:08 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:01.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.737 --rc genhtml_branch_coverage=1 00:11:01.737 --rc genhtml_function_coverage=1 00:11:01.737 --rc genhtml_legend=1 00:11:01.737 --rc geninfo_all_blocks=1 00:11:01.737 --rc geninfo_unexecuted_blocks=1 00:11:01.737 00:11:01.737 ' 00:11:01.737 13:05:08 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:01.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.737 --rc genhtml_branch_coverage=1 00:11:01.737 --rc genhtml_function_coverage=1 00:11:01.737 --rc genhtml_legend=1 00:11:01.737 --rc geninfo_all_blocks=1 00:11:01.737 --rc geninfo_unexecuted_blocks=1 00:11:01.737 00:11:01.737 ' 00:11:01.737 13:05:08 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:01.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.737 --rc genhtml_branch_coverage=1 00:11:01.737 --rc genhtml_function_coverage=1 00:11:01.737 --rc genhtml_legend=1 00:11:01.737 --rc geninfo_all_blocks=1 00:11:01.737 --rc geninfo_unexecuted_blocks=1 00:11:01.737 00:11:01.737 ' 00:11:01.737 13:05:08 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:01.737 13:05:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:11:01.737 13:05:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.737 13:05:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.737 13:05:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.737 13:05:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.737 13:05:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.737 13:05:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.737 13:05:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.737 13:05:08 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.737 13:05:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.737 13:05:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.737 13:05:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:11:01.737 13:05:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:11:01.737 13:05:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.737 13:05:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.737 13:05:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:01.737 13:05:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.737 13:05:08 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:01.737 13:05:08 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.737 13:05:08 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.737 13:05:08 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.737 13:05:08 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.737 13:05:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.737 13:05:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.738 13:05:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.738 13:05:08 json_config -- paths/export.sh@5 -- # export PATH 00:11:01.738 13:05:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.738 13:05:08 json_config -- nvmf/common.sh@51 -- # : 0 00:11:01.738 13:05:08 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.738 13:05:08 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.738 13:05:08 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.738 13:05:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.738 13:05:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.738 13:05:08 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.738 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.738 13:05:08 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.738 13:05:08 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.738 13:05:08 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:11:01.738 INFO: JSON configuration test init 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:11:01.738 13:05:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.738 13:05:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:11:01.738 13:05:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.738 13:05:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:01.738 Waiting for target to run... 00:11:01.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
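The launch that follows is the standard json_config startup pattern; restated as a standalone sketch (paths and flags exactly as invoked in the trace just below):

    # json_config_test_start_app: launch the target against a private RPC
    # socket, then block until it is listening.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    app_pid=$!
    waitforlisten "$app_pid" /var/tmp/spdk_tgt.sock   # from autotest_common.sh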
00:11:01.738 13:05:08 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:11:01.738 13:05:08 json_config -- json_config/common.sh@9 -- # local app=target 00:11:01.738 13:05:08 json_config -- json_config/common.sh@10 -- # shift 00:11:01.738 13:05:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:01.738 13:05:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:01.738 13:05:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:11:01.738 13:05:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:01.738 13:05:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:01.738 13:05:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59687 00:11:01.738 13:05:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:01.738 13:05:08 json_config -- json_config/common.sh@25 -- # waitforlisten 59687 /var/tmp/spdk_tgt.sock 00:11:01.738 13:05:08 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:11:01.738 13:05:08 json_config -- common/autotest_common.sh@835 -- # '[' -z 59687 ']' 00:11:01.738 13:05:08 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:01.738 13:05:08 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.738 13:05:08 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:01.738 13:05:08 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.738 13:05:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:01.738 [2024-12-06 13:05:08.179247] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:11:01.738 [2024-12-06 13:05:08.179580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59687 ] 00:11:01.996 [2024-12-06 13:05:08.499322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.268 [2024-12-06 13:05:08.526140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.834 13:05:09 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.834 13:05:09 json_config -- common/autotest_common.sh@868 -- # return 0 00:11:02.834 13:05:09 json_config -- json_config/common.sh@26 -- # echo '' 00:11:02.834 00:11:02.834 13:05:09 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:11:02.834 13:05:09 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:11:02.834 13:05:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:02.834 13:05:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:02.834 13:05:09 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:11:02.834 13:05:09 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:11:02.834 13:05:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:02.834 13:05:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:02.834 13:05:09 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:11:02.834 13:05:09 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:11:02.834 13:05:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:11:03.397 13:05:09 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:11:03.397 13:05:09 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:11:03.397 13:05:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:03.397 13:05:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:03.397 13:05:09 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:11:03.397 13:05:09 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:11:03.397 13:05:09 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:11:03.397 13:05:09 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:11:03.397 13:05:09 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:11:03.397 13:05:09 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:11:03.397 13:05:09 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:11:03.397 13:05:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@51 -- # local get_types 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister 
fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@54 -- # sort 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:11:03.961 13:05:10 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:03.961 13:05:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@62 -- # return 0 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:11:03.961 13:05:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:03.961 13:05:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:11:03.961 13:05:10 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:11:03.961 13:05:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:11:04.219 MallocForNvmf0 00:11:04.219 13:05:10 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:11:04.219 13:05:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:11:04.477 MallocForNvmf1 00:11:04.478 13:05:10 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:11:04.478 13:05:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:11:04.736 [2024-12-06 13:05:11.096020] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.736 13:05:11 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:04.736 13:05:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:04.997 13:05:11 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:11:04.997 13:05:11 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:11:05.254 13:05:11 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:11:05.254 13:05:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:11:05.512 13:05:11 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:11:05.512 13:05:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:11:05.813 [2024-12-06 13:05:12.248775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:11:05.813 13:05:12 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:11:05.813 13:05:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:05.813 13:05:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:05.813 13:05:12 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:11:05.813 13:05:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:05.813 13:05:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:06.095 13:05:12 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:11:06.095 13:05:12 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:06.095 13:05:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:06.353 MallocBdevForConfigChangeCheck 00:11:06.353 13:05:12 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:11:06.353 13:05:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:06.353 13:05:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:06.353 13:05:12 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:11:06.353 13:05:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:06.919 INFO: shutting down applications... 00:11:06.919 13:05:13 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
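Everything create_nvmf_subsystem_config just did above reduces to a handful of rpc.py calls; spelled out, with every argument taken verbatim from the trace:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    # plus the marker bdev used later to detect config drift:
    $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck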
00:11:06.919 13:05:13 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:11:06.919 13:05:13 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:11:06.919 13:05:13 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:11:06.919 13:05:13 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:11:07.178 Calling clear_iscsi_subsystem 00:11:07.178 Calling clear_nvmf_subsystem 00:11:07.178 Calling clear_nbd_subsystem 00:11:07.178 Calling clear_ublk_subsystem 00:11:07.178 Calling clear_vhost_blk_subsystem 00:11:07.178 Calling clear_vhost_scsi_subsystem 00:11:07.178 Calling clear_bdev_subsystem 00:11:07.178 13:05:13 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:11:07.178 13:05:13 json_config -- json_config/json_config.sh@350 -- # count=100 00:11:07.178 13:05:13 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:11:07.178 13:05:13 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:07.178 13:05:13 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:11:07.178 13:05:13 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:11:07.437 13:05:13 json_config -- json_config/json_config.sh@352 -- # break 00:11:07.437 13:05:13 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:11:07.437 13:05:13 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:11:07.437 13:05:13 json_config -- json_config/common.sh@31 -- # local app=target 00:11:07.437 13:05:13 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:07.437 13:05:13 json_config -- json_config/common.sh@35 -- # [[ -n 59687 ]] 00:11:07.437 13:05:13 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59687 00:11:07.437 13:05:13 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:07.437 13:05:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:07.437 13:05:13 json_config -- json_config/common.sh@41 -- # kill -0 59687 00:11:07.437 13:05:13 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:11:08.003 13:05:14 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:11:08.003 13:05:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:08.003 13:05:14 json_config -- json_config/common.sh@41 -- # kill -0 59687 00:11:08.003 13:05:14 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:08.003 SPDK target shutdown done 00:11:08.003 INFO: relaunching applications... 00:11:08.003 13:05:14 json_config -- json_config/common.sh@43 -- # break 00:11:08.003 13:05:14 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:08.003 13:05:14 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:08.003 13:05:14 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
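The shutdown just traced follows a fixed pattern: clear every subsystem via clear_config.py, then SIGINT the target and poll for exit. A condensed sketch of the kill loop, with the constants (30 iterations, 0.5s sleep) from the trace:

    # json_config_test_shutdown_app: signal, then give the target up to
    # 30 * 0.5s to exit, checking liveness with kill -0.
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break
        sleep 0.5
    done
    echo 'SPDK target shutdown done'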
00:11:08.003 13:05:14 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:08.003 13:05:14 json_config -- json_config/common.sh@9 -- # local app=target 00:11:08.003 13:05:14 json_config -- json_config/common.sh@10 -- # shift 00:11:08.003 13:05:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:08.003 13:05:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:08.003 13:05:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:11:08.003 13:05:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:08.003 13:05:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:08.003 Waiting for target to run... 00:11:08.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:08.003 13:05:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59978 00:11:08.003 13:05:14 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:08.003 13:05:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:08.003 13:05:14 json_config -- json_config/common.sh@25 -- # waitforlisten 59978 /var/tmp/spdk_tgt.sock 00:11:08.004 13:05:14 json_config -- common/autotest_common.sh@835 -- # '[' -z 59978 ']' 00:11:08.004 13:05:14 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:08.004 13:05:14 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.004 13:05:14 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:08.004 13:05:14 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.004 13:05:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:08.004 [2024-12-06 13:05:14.509353] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:08.004 [2024-12-06 13:05:14.509665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59978 ] 00:11:08.571 [2024-12-06 13:05:14.794424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.571 [2024-12-06 13:05:14.822723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.828 [2024-12-06 13:05:15.149001] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.828 [2024-12-06 13:05:15.181071] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:11:09.085 13:05:15 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.085 00:11:09.085 INFO: Checking if target configuration is the same... 00:11:09.085 13:05:15 json_config -- common/autotest_common.sh@868 -- # return 0 00:11:09.085 13:05:15 json_config -- json_config/common.sh@26 -- # echo '' 00:11:09.085 13:05:15 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:11:09.085 13:05:15 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 
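The same-config check announced here (and traced below) is a normalize-then-diff: both the live configuration and the on-disk one pass through config_filter.py -method sort before comparison. A sketch, assuming config_filter.py filters stdin as its bare invocations in the trace suggest; the /tmp names are illustrative stand-ins for the mktemp files below:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    CF=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    $RPC save_config | $CF -method sort > /tmp/live.json
    $CF -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/disk.json
    diff -u /tmp/live.json /tmp/disk.json && echo 'INFO: JSON config files are the same'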
00:11:09.085 13:05:15 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:09.085 13:05:15 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:11:09.085 13:05:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:09.085 + '[' 2 -ne 2 ']' 00:11:09.085 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:11:09.085 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:11:09.085 + rootdir=/home/vagrant/spdk_repo/spdk 00:11:09.085 +++ basename /dev/fd/62 00:11:09.085 ++ mktemp /tmp/62.XXX 00:11:09.085 + tmp_file_1=/tmp/62.h7L 00:11:09.085 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:09.085 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:11:09.085 + tmp_file_2=/tmp/spdk_tgt_config.json.dBm 00:11:09.085 + ret=0 00:11:09.085 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:09.651 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:09.651 + diff -u /tmp/62.h7L /tmp/spdk_tgt_config.json.dBm 00:11:09.651 INFO: JSON config files are the same 00:11:09.651 + echo 'INFO: JSON config files are the same' 00:11:09.651 + rm /tmp/62.h7L /tmp/spdk_tgt_config.json.dBm 00:11:09.651 + exit 0 00:11:09.651 INFO: changing configuration and checking if this can be detected... 00:11:09.651 13:05:16 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:11:09.651 13:05:16 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:11:09.651 13:05:16 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:11:09.651 13:05:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:11:09.909 13:05:16 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:09.909 13:05:16 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:11:09.909 13:05:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:09.909 + '[' 2 -ne 2 ']' 00:11:09.909 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:11:09.909 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:11:09.909 + rootdir=/home/vagrant/spdk_repo/spdk 00:11:09.909 +++ basename /dev/fd/62 00:11:09.909 ++ mktemp /tmp/62.XXX 00:11:09.909 + tmp_file_1=/tmp/62.7J8 00:11:09.909 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:09.909 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:11:09.909 + tmp_file_2=/tmp/spdk_tgt_config.json.Jl6 00:11:09.909 + ret=0 00:11:09.909 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:10.476 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:10.476 + diff -u /tmp/62.7J8 /tmp/spdk_tgt_config.json.Jl6 00:11:10.476 + ret=1 00:11:10.476 + echo '=== Start of file: /tmp/62.7J8 ===' 00:11:10.476 + cat /tmp/62.7J8 00:11:10.476 + echo '=== End of file: /tmp/62.7J8 ===' 00:11:10.476 + echo '' 00:11:10.476 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Jl6 ===' 00:11:10.476 + cat /tmp/spdk_tgt_config.json.Jl6 00:11:10.476 + echo '=== End of file: /tmp/spdk_tgt_config.json.Jl6 ===' 00:11:10.476 + echo '' 00:11:10.476 + rm /tmp/62.7J8 /tmp/spdk_tgt_config.json.Jl6 00:11:10.476 + exit 1 00:11:10.476 INFO: configuration change detected. 00:11:10.476 13:05:16 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:11:10.476 13:05:16 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:11:10.476 13:05:16 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:11:10.476 13:05:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.476 13:05:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:10.476 13:05:16 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:11:10.476 13:05:16 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:11:10.476 13:05:16 json_config -- json_config/json_config.sh@324 -- # [[ -n 59978 ]] 00:11:10.476 13:05:16 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:11:10.476 13:05:16 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:11:10.476 13:05:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.476 13:05:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:10.476 13:05:16 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:11:10.476 13:05:16 json_config -- json_config/json_config.sh@200 -- # uname -s 00:11:10.476 13:05:16 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:11:10.476 13:05:16 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:11:10.476 13:05:16 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:11:10.476 13:05:16 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:11:10.476 13:05:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:10.476 13:05:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:10.476 13:05:16 json_config -- json_config/json_config.sh@330 -- # killprocess 59978 00:11:10.476 13:05:16 json_config -- common/autotest_common.sh@954 -- # '[' -z 59978 ']' 00:11:10.476 13:05:16 json_config -- common/autotest_common.sh@958 -- # kill -0 59978 00:11:10.476 13:05:16 json_config -- common/autotest_common.sh@959 -- # uname 00:11:10.476 13:05:16 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.476 13:05:16 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59978 00:11:10.476 
killing process with pid 59978 00:11:10.476 13:05:16 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.476 13:05:16 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.476 13:05:16 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59978' 00:11:10.476 13:05:16 json_config -- common/autotest_common.sh@973 -- # kill 59978 00:11:10.476 13:05:16 json_config -- common/autotest_common.sh@978 -- # wait 59978 00:11:10.734 13:05:17 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:10.734 13:05:17 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:11:10.734 13:05:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:10.734 13:05:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:10.734 INFO: Success 00:11:10.734 13:05:17 json_config -- json_config/json_config.sh@335 -- # return 0 00:11:10.734 13:05:17 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:11:10.734 00:11:10.734 real 0m9.276s 00:11:10.734 user 0m13.898s 00:11:10.734 sys 0m1.561s 00:11:10.734 13:05:17 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.734 ************************************ 00:11:10.734 END TEST json_config 00:11:10.734 ************************************ 00:11:10.734 13:05:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:10.734 13:05:17 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:10.734 13:05:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:10.734 13:05:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.734 13:05:17 -- common/autotest_common.sh@10 -- # set +x 00:11:10.734 ************************************ 00:11:10.734 START TEST json_config_extra_key 00:11:10.734 ************************************ 00:11:10.734 13:05:17 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:10.993 13:05:17 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:10.993 13:05:17 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:11:10.993 13:05:17 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:10.993 13:05:17 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.993 13:05:17 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:11:10.993 13:05:17 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.993 13:05:17 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:10.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.993 --rc genhtml_branch_coverage=1 00:11:10.993 --rc genhtml_function_coverage=1 00:11:10.993 --rc genhtml_legend=1 00:11:10.993 --rc geninfo_all_blocks=1 00:11:10.993 --rc geninfo_unexecuted_blocks=1 00:11:10.993 00:11:10.993 ' 00:11:10.993 13:05:17 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:10.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.993 --rc genhtml_branch_coverage=1 00:11:10.993 --rc genhtml_function_coverage=1 00:11:10.993 --rc genhtml_legend=1 00:11:10.993 --rc geninfo_all_blocks=1 00:11:10.993 --rc geninfo_unexecuted_blocks=1 00:11:10.993 00:11:10.993 ' 00:11:10.993 13:05:17 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:10.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.993 --rc genhtml_branch_coverage=1 00:11:10.993 --rc genhtml_function_coverage=1 00:11:10.993 --rc genhtml_legend=1 00:11:10.993 --rc geninfo_all_blocks=1 00:11:10.993 --rc geninfo_unexecuted_blocks=1 00:11:10.993 00:11:10.993 ' 00:11:10.993 13:05:17 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:10.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.993 --rc genhtml_branch_coverage=1 00:11:10.993 --rc genhtml_function_coverage=1 00:11:10.993 --rc genhtml_legend=1 00:11:10.993 --rc geninfo_all_blocks=1 00:11:10.993 --rc geninfo_unexecuted_blocks=1 00:11:10.993 00:11:10.993 ' 00:11:10.993 13:05:17 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.993 13:05:17 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.993 13:05:17 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.993 13:05:17 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.993 13:05:17 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.993 13:05:17 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:11:10.993 13:05:17 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:10.993 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:10.993 13:05:17 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:10.993 13:05:17 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:10.993 13:05:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:11:10.993 13:05:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:11:10.993 13:05:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:11:10.993 13:05:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:11:10.993 13:05:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:11:10.993 INFO: launching applications... 00:11:10.993 13:05:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:11:10.993 13:05:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:11:10.993 13:05:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:11:10.993 13:05:17 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:10.993 13:05:17 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
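The declare -A lines above are how json_config/common.sh keeps per-app state: one associative array each for pid, RPC socket, launch parameters, and config path, all keyed by the app name ("target" here). The pattern in isolation, with the values this trace uses:

    declare -A app_pid app_socket app_params configs_path
    app_socket['target']=/var/tmp/spdk_tgt.sock
    app_params['target']='-m 0x1 -s 1024'
    configs_path['target']=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
    app_pid['target']=60156            # filled in once spdk_tgt has been launched
    echo "target: pid=${app_pid['target']} socket=${app_socket['target']}"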
00:11:10.993 13:05:17 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:10.993 13:05:17 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:11:10.993 13:05:17 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:11:10.994 13:05:17 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:10.994 13:05:17 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:10.994 13:05:17 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:11:10.994 13:05:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:10.994 13:05:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:10.994 13:05:17 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=60156 00:11:10.994 13:05:17 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:10.994 13:05:17 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:10.994 Waiting for target to run... 00:11:10.994 13:05:17 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 60156 /var/tmp/spdk_tgt.sock 00:11:10.994 13:05:17 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 60156 ']' 00:11:10.994 13:05:17 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:10.994 13:05:17 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.994 13:05:17 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:10.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:10.994 13:05:17 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.994 13:05:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:10.994 [2024-12-06 13:05:17.496924] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:10.994 [2024-12-06 13:05:17.497319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60156 ] 00:11:11.559 [2024-12-06 13:05:17.801420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.559 [2024-12-06 13:05:17.827239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.125 13:05:18 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.125 13:05:18 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:11:12.125 13:05:18 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:11:12.125 00:11:12.125 INFO: shutting down applications... 00:11:12.125 13:05:18 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
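waitforlisten (from autotest_common.sh, whose body is not shown in this trace) blocks until the freshly launched target both stays alive and answers RPCs on its UNIX domain socket; the max_retries=100 default appears in the trace above. A hypothetical stand-in with the same shape, not the real helper:

    wait_for_rpc() {                                  # hypothetical sketch, not the real waitforlisten
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100
        local i
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died before it started listening
            if [ -S "$rpc_addr" ] && scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                              # socket is up and the RPC server responds
            fi
            sleep 0.1
        done
        return 1                                      # timed out waiting for the listener
    }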
00:11:12.125 13:05:18 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:11:12.125 13:05:18 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:11:12.125 13:05:18 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:12.125 13:05:18 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 60156 ]] 00:11:12.125 13:05:18 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 60156 00:11:12.125 13:05:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:12.125 13:05:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:12.125 13:05:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60156 00:11:12.125 13:05:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:12.691 13:05:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:12.691 13:05:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:12.691 13:05:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60156 00:11:12.691 13:05:19 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:12.691 13:05:19 json_config_extra_key -- json_config/common.sh@43 -- # break 00:11:12.691 13:05:19 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:12.691 13:05:19 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:12.691 SPDK target shutdown done 00:11:12.691 13:05:19 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:11:12.691 Success 00:11:12.691 00:11:12.691 real 0m1.887s 00:11:12.691 user 0m1.851s 00:11:12.691 sys 0m0.339s 00:11:12.691 13:05:19 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.691 13:05:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:12.691 ************************************ 00:11:12.691 END TEST json_config_extra_key 00:11:12.691 ************************************ 00:11:12.691 13:05:19 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:12.691 13:05:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:12.691 13:05:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.691 13:05:19 -- common/autotest_common.sh@10 -- # set +x 00:11:12.691 ************************************ 00:11:12.691 START TEST alias_rpc 00:11:12.691 ************************************ 00:11:12.691 13:05:19 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:12.949 * Looking for test storage... 
00:11:12.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:11:12.949 13:05:19 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:12.949 13:05:19 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:12.949 13:05:19 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:12.949 13:05:19 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@345 -- # : 1 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:11:12.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.949 13:05:19 alias_rpc -- scripts/common.sh@368 -- # return 0 00:11:12.949 13:05:19 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.949 13:05:19 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:12.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.949 --rc genhtml_branch_coverage=1 00:11:12.949 --rc genhtml_function_coverage=1 00:11:12.949 --rc genhtml_legend=1 00:11:12.949 --rc geninfo_all_blocks=1 00:11:12.949 --rc geninfo_unexecuted_blocks=1 00:11:12.949 00:11:12.949 ' 00:11:12.949 13:05:19 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:12.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.949 --rc genhtml_branch_coverage=1 00:11:12.949 --rc genhtml_function_coverage=1 00:11:12.949 --rc genhtml_legend=1 00:11:12.949 --rc geninfo_all_blocks=1 00:11:12.949 --rc geninfo_unexecuted_blocks=1 00:11:12.949 00:11:12.949 ' 00:11:12.949 13:05:19 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:12.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.949 --rc genhtml_branch_coverage=1 00:11:12.949 --rc genhtml_function_coverage=1 00:11:12.949 --rc genhtml_legend=1 00:11:12.949 --rc geninfo_all_blocks=1 00:11:12.949 --rc geninfo_unexecuted_blocks=1 00:11:12.949 00:11:12.949 ' 00:11:12.949 13:05:19 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:12.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.949 --rc genhtml_branch_coverage=1 00:11:12.949 --rc genhtml_function_coverage=1 00:11:12.949 --rc genhtml_legend=1 00:11:12.949 --rc geninfo_all_blocks=1 00:11:12.949 --rc geninfo_unexecuted_blocks=1 00:11:12.949 00:11:12.949 ' 00:11:12.949 13:05:19 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:12.949 13:05:19 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=60241 00:11:12.949 13:05:19 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:12.949 13:05:19 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 60241 00:11:12.949 13:05:19 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 60241 ']' 00:11:12.949 13:05:19 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.949 13:05:19 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.949 13:05:19 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.949 13:05:19 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.949 13:05:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.949 [2024-12-06 13:05:19.385824] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:11:12.949 [2024-12-06 13:05:19.386133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60241 ] 00:11:13.207 [2024-12-06 13:05:19.535301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.207 [2024-12-06 13:05:19.576156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.464 13:05:19 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.464 13:05:19 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:13.464 13:05:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:11:13.722 13:05:20 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 60241 00:11:13.722 13:05:20 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 60241 ']' 00:11:13.722 13:05:20 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 60241 00:11:13.722 13:05:20 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:11:13.722 13:05:20 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.722 13:05:20 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60241 00:11:13.722 13:05:20 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:13.722 13:05:20 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:13.722 killing process with pid 60241 00:11:13.722 13:05:20 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60241' 00:11:13.722 13:05:20 alias_rpc -- common/autotest_common.sh@973 -- # kill 60241 00:11:13.722 13:05:20 alias_rpc -- common/autotest_common.sh@978 -- # wait 60241 00:11:13.980 ************************************ 00:11:13.980 END TEST alias_rpc 00:11:13.980 ************************************ 00:11:13.980 00:11:13.980 real 0m1.285s 00:11:13.980 user 0m1.482s 00:11:13.980 sys 0m0.377s 00:11:13.980 13:05:20 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.980 13:05:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.980 13:05:20 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:11:13.980 13:05:20 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:13.980 13:05:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:13.980 13:05:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.980 13:05:20 -- common/autotest_common.sh@10 -- # set +x 00:11:13.980 ************************************ 00:11:13.980 START TEST dpdk_mem_utility 00:11:13.980 ************************************ 00:11:13.980 13:05:20 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:14.238 * Looking for test storage... 
00:11:14.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:11:14.238 13:05:20 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:14.238 13:05:20 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:11:14.238 13:05:20 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:14.238 13:05:20 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.238 13:05:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:11:14.238 13:05:20 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.238 13:05:20 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:14.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.238 --rc genhtml_branch_coverage=1 00:11:14.238 --rc genhtml_function_coverage=1 00:11:14.238 --rc genhtml_legend=1 00:11:14.238 --rc geninfo_all_blocks=1 00:11:14.238 --rc geninfo_unexecuted_blocks=1 00:11:14.238 00:11:14.238 ' 00:11:14.238 13:05:20 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:14.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.238 --rc 
genhtml_branch_coverage=1 00:11:14.238 --rc genhtml_function_coverage=1 00:11:14.238 --rc genhtml_legend=1 00:11:14.238 --rc geninfo_all_blocks=1 00:11:14.238 --rc geninfo_unexecuted_blocks=1 00:11:14.238 00:11:14.238 ' 00:11:14.238 13:05:20 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:14.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.238 --rc genhtml_branch_coverage=1 00:11:14.238 --rc genhtml_function_coverage=1 00:11:14.238 --rc genhtml_legend=1 00:11:14.238 --rc geninfo_all_blocks=1 00:11:14.238 --rc geninfo_unexecuted_blocks=1 00:11:14.238 00:11:14.238 ' 00:11:14.238 13:05:20 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:14.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.238 --rc genhtml_branch_coverage=1 00:11:14.238 --rc genhtml_function_coverage=1 00:11:14.238 --rc genhtml_legend=1 00:11:14.238 --rc geninfo_all_blocks=1 00:11:14.238 --rc geninfo_unexecuted_blocks=1 00:11:14.238 00:11:14.238 ' 00:11:14.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.238 13:05:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:14.238 13:05:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60333 00:11:14.238 13:05:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:14.238 13:05:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60333 00:11:14.238 13:05:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 60333 ']' 00:11:14.238 13:05:20 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.238 13:05:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.238 13:05:20 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.238 13:05:20 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.238 13:05:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:14.238 [2024-12-06 13:05:20.729359] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:11:14.238 [2024-12-06 13:05:20.729765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60333 ] 00:11:14.496 [2024-12-06 13:05:20.880621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.496 [2024-12-06 13:05:20.914466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.753 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.753 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:11:14.753 13:05:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:11:14.753 13:05:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:11:14.753 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.753 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:14.753 { 00:11:14.753 "filename": "/tmp/spdk_mem_dump.txt" 00:11:14.753 } 00:11:14.753 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.753 13:05:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:14.753 DPDK memory size 818.000000 MiB in 1 heap(s) 00:11:14.753 1 heaps totaling size 818.000000 MiB 00:11:14.753 size: 818.000000 MiB heap id: 0 00:11:14.753 end heaps---------- 00:11:14.753 9 mempools totaling size 603.782043 MiB 00:11:14.753 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:11:14.753 size: 158.602051 MiB name: PDU_data_out_Pool 00:11:14.753 size: 100.555481 MiB name: bdev_io_60333 00:11:14.753 size: 50.003479 MiB name: msgpool_60333 00:11:14.753 size: 36.509338 MiB name: fsdev_io_60333 00:11:14.753 size: 21.763794 MiB name: PDU_Pool 00:11:14.753 size: 19.513306 MiB name: SCSI_TASK_Pool 00:11:14.753 size: 4.133484 MiB name: evtpool_60333 00:11:14.753 size: 0.026123 MiB name: Session_Pool 00:11:14.753 end mempools------- 00:11:14.753 6 memzones totaling size 4.142822 MiB 00:11:14.753 size: 1.000366 MiB name: RG_ring_0_60333 00:11:14.753 size: 1.000366 MiB name: RG_ring_1_60333 00:11:14.753 size: 1.000366 MiB name: RG_ring_4_60333 00:11:14.753 size: 1.000366 MiB name: RG_ring_5_60333 00:11:14.753 size: 0.125366 MiB name: RG_ring_2_60333 00:11:14.753 size: 0.015991 MiB name: RG_ring_3_60333 00:11:14.753 end memzones------- 00:11:14.753 13:05:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:11:15.013 heap id: 0 total size: 818.000000 MiB number of busy elements: 228 number of free elements: 15 00:11:15.013 list of free elements. 
size: 10.818787 MiB 00:11:15.013 element at address: 0x200019200000 with size: 0.999878 MiB 00:11:15.013 element at address: 0x200019400000 with size: 0.999878 MiB 00:11:15.013 element at address: 0x200000400000 with size: 0.996155 MiB 00:11:15.013 element at address: 0x200032000000 with size: 0.994446 MiB 00:11:15.013 element at address: 0x200006400000 with size: 0.959839 MiB 00:11:15.013 element at address: 0x200012c00000 with size: 0.944275 MiB 00:11:15.013 element at address: 0x200019600000 with size: 0.936584 MiB 00:11:15.013 element at address: 0x200000200000 with size: 0.717346 MiB 00:11:15.013 element at address: 0x20001ae00000 with size: 0.573364 MiB 00:11:15.013 element at address: 0x200000c00000 with size: 0.490662 MiB 00:11:15.013 element at address: 0x20000a600000 with size: 0.489807 MiB 00:11:15.013 element at address: 0x200019800000 with size: 0.485657 MiB 00:11:15.013 element at address: 0x200003e00000 with size: 0.481201 MiB 00:11:15.013 element at address: 0x200028200000 with size: 0.396301 MiB 00:11:15.013 element at address: 0x200000800000 with size: 0.353394 MiB 00:11:15.013 list of standard malloc elements. size: 199.252319 MiB 00:11:15.013 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:11:15.013 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:11:15.013 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:11:15.013 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:11:15.013 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:11:15.013 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:11:15.013 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:11:15.013 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:11:15.013 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:11:15.013 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:11:15.013 element at address: 0x20000085a780 with size: 0.000183 MiB 00:11:15.013 element at address: 0x20000085a980 with size: 0.000183 MiB 00:11:15.013 element at address: 0x20000085ec40 with size: 0.000183 MiB 00:11:15.013 element at address: 0x20000087ef00 with size: 0.000183 MiB 
00:11:15.013 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:11:15.013 element at address: 0x20000087f080 with size: 0.000183 MiB 00:11:15.013 element at address: 0x20000087f140 with size: 0.000183 MiB 00:11:15.013 element at address: 0x20000087f200 with size: 0.000183 MiB 00:11:15.013 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:11:15.013 element at address: 0x20000087f380 with size: 0.000183 MiB 00:11:15.013 element at address: 0x20000087f440 with size: 0.000183 MiB 00:11:15.013 element at address: 0x20000087f500 with size: 0.000183 MiB 00:11:15.013 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:11:15.013 element at address: 0x20000087f680 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000cff000 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:11:15.013 element at address: 0x200003efb980 with size: 0.000183 MiB 00:11:15.013 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:11:15.013 element at 
address: 0x20000a67d640 with size: 0.000183 MiB 00:11:15.013 element at address: 0x20000a67d700 with size: 0.000183 MiB [... roughly 140 further "element at address ... with size: 0.000183 MiB" entries elided, all timestamped 00:11:15.013-00:11:15.014: the remaining 0x20000a67dxxx slots, 0x20000a6fdd80, 0x200012cf1bc0, 0x2000196efc40 and 0x2000196efd00, 0x2000198bc740, the contiguous 0x20001ae92c80-0x20001ae95440 run, 0x200028265740 and 0x200028265800, and the contiguous 0x20002826c400-0x20002826ff00 run ...] 00:11:15.014 list of memzone associated elements.
size: 607.928894 MiB 00:11:15.014 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:11:15.014 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:11:15.014 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:11:15.014 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:11:15.014 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:11:15.014 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_60333_0 00:11:15.014 element at address: 0x200000dff380 with size: 48.003052 MiB 00:11:15.014 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60333_0 00:11:15.014 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:11:15.014 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_60333_0 00:11:15.014 element at address: 0x2000199be940 with size: 20.255554 MiB 00:11:15.014 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:11:15.015 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:11:15.015 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:11:15.015 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:11:15.015 associated memzone info: size: 3.000122 MiB name: MP_evtpool_60333_0 00:11:15.015 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:11:15.015 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60333 00:11:15.015 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:11:15.015 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60333 00:11:15.015 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:11:15.015 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:11:15.015 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:11:15.015 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:11:15.015 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:11:15.015 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:11:15.015 element at address: 0x200003efba40 with size: 1.008118 MiB 00:11:15.015 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:11:15.015 element at address: 0x200000cff180 with size: 1.000488 MiB 00:11:15.015 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60333 00:11:15.015 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:11:15.015 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60333 00:11:15.015 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:11:15.015 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60333 00:11:15.015 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:11:15.015 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60333 00:11:15.015 element at address: 0x20000087f740 with size: 0.500488 MiB 00:11:15.015 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_60333 00:11:15.015 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:11:15.015 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60333 00:11:15.015 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:11:15.015 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:11:15.015 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:11:15.015 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:11:15.015 element at address: 0x20001987c540 with size: 0.250488 MiB 00:11:15.015 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:11:15.015 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:11:15.015 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_60333 00:11:15.015 element at address: 0x20000085ed00 with size: 0.125488 MiB 00:11:15.015 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60333 00:11:15.015 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:11:15.015 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:11:15.015 element at address: 0x2000282658c0 with size: 0.023743 MiB 00:11:15.015 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:11:15.015 element at address: 0x20000085aa40 with size: 0.016113 MiB 00:11:15.015 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60333 00:11:15.015 element at address: 0x20002826ba00 with size: 0.002441 MiB 00:11:15.015 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:11:15.015 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:11:15.015 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60333 00:11:15.015 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:11:15.015 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_60333 00:11:15.015 element at address: 0x20000085a840 with size: 0.000305 MiB 00:11:15.015 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60333 00:11:15.015 element at address: 0x20002826c4c0 with size: 0.000305 MiB 00:11:15.015 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:11:15.015 13:05:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:11:15.015 13:05:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60333 00:11:15.015 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 60333 ']' 00:11:15.015 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 60333 00:11:15.015 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:11:15.015 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.015 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60333 00:11:15.015 killing process with pid 60333 00:11:15.015 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.015 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.015 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60333' 00:11:15.015 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 60333 00:11:15.015 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 60333 00:11:15.285 00:11:15.285 real 0m1.095s 00:11:15.285 user 0m1.332s 00:11:15.285 sys 0m0.323s 00:11:15.285 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.285 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:15.285 ************************************ 00:11:15.285 END TEST dpdk_mem_utility 00:11:15.285 ************************************ 00:11:15.285 13:05:21 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:15.285 13:05:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:15.285 13:05:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.285 13:05:21 -- common/autotest_common.sh@10 -- # set +x 
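The teardown traced just above (an argument check, a kill -0 liveness probe, a uname gate, a ps comm= guard against signalling sudo, then kill and wait) is the stock killprocess pattern from common/autotest_common.sh. A minimal sketch of that pattern, reconstructed from the xtrace rather than copied verbatim from the helper:

killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1               # require a pid argument
  kill -0 "$pid" 2>/dev/null || return 0  # already gone, nothing to do
  if [ "$(uname)" = Linux ]; then
    # mirror the comm= check in the trace: never signal sudo itself
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true         # wait only reaps child processes
}

Here the process name came back as reactor_0, so the sudo guard passed and pid 60333 was reaped cleanly.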
00:11:15.285 ************************************ 00:11:15.285 START TEST event 00:11:15.285 ************************************ 00:11:15.285 13:05:21 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:15.285 * Looking for test storage... 00:11:15.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:15.285 13:05:21 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:15.285 13:05:21 event -- common/autotest_common.sh@1711 -- # lcov --version 00:11:15.285 13:05:21 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:15.285 13:05:21 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:15.285 13:05:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.285 13:05:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.285 13:05:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.285 13:05:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.285 13:05:21 event -- scripts/common.sh@336 -- # read -ra ver1 00:11:15.285 13:05:21 event -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.285 13:05:21 event -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.285 13:05:21 event -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.285 13:05:21 event -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.285 13:05:21 event -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.285 13:05:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:15.285 13:05:21 event -- scripts/common.sh@344 -- # case "$op" in 00:11:15.285 13:05:21 event -- scripts/common.sh@345 -- # : 1 00:11:15.285 13:05:21 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.285 13:05:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:15.285 13:05:21 event -- scripts/common.sh@365 -- # decimal 1 00:11:15.285 13:05:21 event -- scripts/common.sh@353 -- # local d=1 00:11:15.285 13:05:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.285 13:05:21 event -- scripts/common.sh@355 -- # echo 1 00:11:15.285 13:05:21 event -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.285 13:05:21 event -- scripts/common.sh@366 -- # decimal 2 00:11:15.285 13:05:21 event -- scripts/common.sh@353 -- # local d=2 00:11:15.285 13:05:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.285 13:05:21 event -- scripts/common.sh@355 -- # echo 2 00:11:15.285 13:05:21 event -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.285 13:05:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.285 13:05:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.285 13:05:21 event -- scripts/common.sh@368 -- # return 0 00:11:15.285 13:05:21 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.285 13:05:21 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:15.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.285 --rc genhtml_branch_coverage=1 00:11:15.285 --rc genhtml_function_coverage=1 00:11:15.285 --rc genhtml_legend=1 00:11:15.285 --rc geninfo_all_blocks=1 00:11:15.285 --rc geninfo_unexecuted_blocks=1 00:11:15.285 00:11:15.285 ' 00:11:15.285 13:05:21 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:15.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.285 --rc genhtml_branch_coverage=1 00:11:15.285 --rc genhtml_function_coverage=1 00:11:15.285 --rc genhtml_legend=1 00:11:15.285 --rc 
geninfo_all_blocks=1 00:11:15.285 --rc geninfo_unexecuted_blocks=1 00:11:15.285 00:11:15.285 ' 00:11:15.285 13:05:21 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:15.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.285 --rc genhtml_branch_coverage=1 00:11:15.285 --rc genhtml_function_coverage=1 00:11:15.285 --rc genhtml_legend=1 00:11:15.285 --rc geninfo_all_blocks=1 00:11:15.285 --rc geninfo_unexecuted_blocks=1 00:11:15.285 00:11:15.285 ' 00:11:15.285 13:05:21 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:15.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.285 --rc genhtml_branch_coverage=1 00:11:15.285 --rc genhtml_function_coverage=1 00:11:15.285 --rc genhtml_legend=1 00:11:15.285 --rc geninfo_all_blocks=1 00:11:15.285 --rc geninfo_unexecuted_blocks=1 00:11:15.285 00:11:15.285 ' 00:11:15.285 13:05:21 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:15.285 13:05:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:11:15.285 13:05:21 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:15.285 13:05:21 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:11:15.285 13:05:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.285 13:05:21 event -- common/autotest_common.sh@10 -- # set +x 00:11:15.285 ************************************ 00:11:15.285 START TEST event_perf 00:11:15.285 ************************************ 00:11:15.285 13:05:21 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:15.542 Running I/O for 1 seconds...[2024-12-06 13:05:21.817826] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:15.542 [2024-12-06 13:05:21.817908] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60417 ] 00:11:15.542 [2024-12-06 13:05:21.964358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.542 [2024-12-06 13:05:22.002540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.542 [2024-12-06 13:05:22.002620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.543 [2024-12-06 13:05:22.002696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.543 [2024-12-06 13:05:22.002699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.915 Running I/O for 1 seconds... 00:11:16.915 lcore 0: 190037 00:11:16.915 lcore 1: 190032 00:11:16.915 lcore 2: 190034 00:11:16.915 lcore 3: 190036 00:11:16.915 done. 
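event_perf was launched with -m 0xF, and the four "Reactor started on core" notices plus the per-lcore counts above fall directly out of that mask. An illustrative loop (not part of the test) showing how such a hex coremask expands to lcore numbers:

mask=0xF
for ((core = 0; core < 64; core++)); do
  # bit N set in the mask means lcore N runs a reactor
  (( (mask >> core) & 1 )) && echo "lcore $core enabled"
done

With mask=0xF this prints lcores 0 through 3, matching the four counters the run reports.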
00:11:16.915 ************************************ 00:11:16.915 END TEST event_perf 00:11:16.915 ************************************ 00:11:16.915 00:11:16.915 real 0m1.254s 00:11:16.915 user 0m4.077s 00:11:16.915 sys 0m0.044s 00:11:16.915 13:05:23 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.915 13:05:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:11:16.915 13:05:23 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:16.915 13:05:23 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:16.915 13:05:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.915 13:05:23 event -- common/autotest_common.sh@10 -- # set +x 00:11:16.915 ************************************ 00:11:16.915 START TEST event_reactor 00:11:16.915 ************************************ 00:11:16.915 13:05:23 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:16.915 [2024-12-06 13:05:23.114558] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:16.915 [2024-12-06 13:05:23.114685] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60450 ] 00:11:16.915 [2024-12-06 13:05:23.288992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.916 [2024-12-06 13:05:23.321808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.850 test_start 00:11:17.850 oneshot 00:11:17.850 tick 100 00:11:17.850 tick 100 00:11:17.850 tick 250 00:11:17.850 tick 100 00:11:17.850 tick 100 00:11:17.850 tick 100 00:11:17.850 tick 250 00:11:17.850 tick 500 00:11:17.850 tick 100 00:11:17.850 tick 100 00:11:17.850 tick 250 00:11:17.850 tick 100 00:11:17.850 tick 100 00:11:17.850 test_end 00:11:17.850 ************************************ 00:11:17.850 END TEST event_reactor 00:11:17.850 ************************************ 00:11:17.850 00:11:17.850 real 0m1.268s 00:11:17.850 user 0m1.117s 00:11:17.850 sys 0m0.042s 00:11:17.850 13:05:24 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.850 13:05:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:11:18.108 13:05:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:18.108 13:05:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:18.108 13:05:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.108 13:05:24 event -- common/autotest_common.sh@10 -- # set +x 00:11:18.108 ************************************ 00:11:18.108 START TEST event_reactor_perf 00:11:18.108 ************************************ 00:11:18.108 13:05:24 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:18.108 [2024-12-06 13:05:24.424965] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:11:18.108 [2024-12-06 13:05:24.425061] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60480 ] 00:11:18.108 [2024-12-06 13:05:24.567146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.108 [2024-12-06 13:05:24.602277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.480 test_start 00:11:19.480 test_end 00:11:19.480 Performance: 356417 events per second 00:11:19.480 00:11:19.480 real 0m1.240s 00:11:19.480 user 0m1.095s 00:11:19.480 sys 0m0.037s 00:11:19.480 ************************************ 00:11:19.480 END TEST event_reactor_perf 00:11:19.480 ************************************ 00:11:19.480 13:05:25 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.480 13:05:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:11:19.480 13:05:25 event -- event/event.sh@49 -- # uname -s 00:11:19.480 13:05:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:11:19.480 13:05:25 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:19.480 13:05:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:19.480 13:05:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.480 13:05:25 event -- common/autotest_common.sh@10 -- # set +x 00:11:19.480 ************************************ 00:11:19.480 START TEST event_scheduler 00:11:19.480 ************************************ 00:11:19.480 13:05:25 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:19.480 * Looking for test storage... 
00:11:19.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:11:19.480 13:05:25 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:19.480 13:05:25 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:11:19.480 13:05:25 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:19.480 13:05:25 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.480 13:05:25 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:11:19.480 13:05:25 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.480 13:05:25 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:19.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.480 --rc genhtml_branch_coverage=1 00:11:19.480 --rc genhtml_function_coverage=1 00:11:19.480 --rc genhtml_legend=1 00:11:19.480 --rc geninfo_all_blocks=1 00:11:19.480 --rc geninfo_unexecuted_blocks=1 00:11:19.480 00:11:19.480 ' 00:11:19.480 13:05:25 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:19.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.480 --rc genhtml_branch_coverage=1 00:11:19.480 --rc genhtml_function_coverage=1 00:11:19.480 --rc genhtml_legend=1 00:11:19.480 --rc geninfo_all_blocks=1 00:11:19.480 --rc geninfo_unexecuted_blocks=1 00:11:19.480 00:11:19.480 ' 00:11:19.480 13:05:25 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:19.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.480 --rc genhtml_branch_coverage=1 00:11:19.480 --rc genhtml_function_coverage=1 00:11:19.480 --rc genhtml_legend=1 00:11:19.480 --rc geninfo_all_blocks=1 00:11:19.480 --rc geninfo_unexecuted_blocks=1 00:11:19.480 00:11:19.480 ' 00:11:19.480 13:05:25 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:19.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.480 --rc genhtml_branch_coverage=1 00:11:19.480 --rc genhtml_function_coverage=1 00:11:19.480 --rc genhtml_legend=1 00:11:19.480 --rc geninfo_all_blocks=1 00:11:19.480 --rc geninfo_unexecuted_blocks=1 00:11:19.480 00:11:19.480 ' 00:11:19.480 13:05:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:11:19.480 13:05:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60555 00:11:19.480 13:05:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:11:19.480 13:05:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:11:19.480 13:05:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60555 00:11:19.480 13:05:25 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60555 ']' 00:11:19.480 13:05:25 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.480 13:05:25 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.480 13:05:25 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.480 13:05:25 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.480 13:05:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:19.480 [2024-12-06 13:05:25.965057] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:19.480 [2024-12-06 13:05:25.965450] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60555 ] 00:11:19.738 [2024-12-06 13:05:26.119622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:19.738 [2024-12-06 13:05:26.165414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.738 [2024-12-06 13:05:26.165496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.738 [2024-12-06 13:05:26.166138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.738 [2024-12-06 13:05:26.166149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.997 13:05:26 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.997 13:05:26 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:11:19.997 13:05:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:11:19.997 13:05:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.997 13:05:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:19.997 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:19.997 POWER: Cannot set governor of lcore 0 to userspace 00:11:19.997 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:19.997 POWER: Cannot set governor of lcore 0 to performance 00:11:19.997 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:19.997 POWER: Cannot set governor of lcore 0 to userspace 00:11:19.997 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:19.997 POWER: Cannot set governor of lcore 0 to userspace 00:11:19.997 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:11:19.997 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:11:19.997 POWER: Unable to set Power Management Environment for lcore 0 00:11:19.997 [2024-12-06 13:05:26.280468] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:11:19.997 [2024-12-06 13:05:26.280489] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:11:19.997 [2024-12-06 13:05:26.280502] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:11:19.997 [2024-12-06 13:05:26.280519] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:11:19.997 [2024-12-06 13:05:26.280530] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:11:19.997 [2024-12-06 13:05:26.280540] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:11:19.997 13:05:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.997 13:05:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:11:19.997 13:05:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.997 13:05:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:19.997 [2024-12-06 13:05:26.345858] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:11:19.997 13:05:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.997 13:05:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:11:19.997 13:05:26 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:19.997 13:05:26 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.997 13:05:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:19.997 ************************************ 00:11:19.997 START TEST scheduler_create_thread 00:11:19.997 ************************************ 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:19.997 2 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:19.997 3 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:19.997 4 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:19.997 5 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:19.997 6 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:19.997 7 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:19.997 8 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:19.997 9 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:19.997 10 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.997 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:20.562 ************************************ 00:11:20.562 END TEST scheduler_create_thread 00:11:20.562 ************************************ 00:11:20.562 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.562 00:11:20.562 real 0m0.591s 00:11:20.562 user 0m0.018s 00:11:20.562 sys 0m0.005s 00:11:20.562 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.562 13:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:20.562 13:05:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:20.562 13:05:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60555 00:11:20.562 13:05:26 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60555 ']' 00:11:20.562 13:05:26 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60555 00:11:20.562 13:05:26 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:11:20.563 13:05:26 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.563 13:05:26 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60555 00:11:20.563 killing process with pid 60555 00:11:20.563 13:05:27 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:20.563 13:05:27 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:20.563 13:05:27 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60555' 00:11:20.563 13:05:27 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60555 00:11:20.563 13:05:27 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 60555 00:11:21.128 [2024-12-06 13:05:27.428231] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:11:21.128 ************************************ 00:11:21.128 END TEST event_scheduler 00:11:21.128 ************************************ 00:11:21.128 00:11:21.128 real 0m1.897s 00:11:21.128 user 0m2.464s 00:11:21.128 sys 0m0.304s 00:11:21.128 13:05:27 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.128 13:05:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:21.128 13:05:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:11:21.128 13:05:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:21.128 13:05:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:21.128 13:05:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.128 13:05:27 event -- common/autotest_common.sh@10 -- # set +x 00:11:21.128 ************************************ 00:11:21.128 START TEST app_repeat 00:11:21.128 ************************************ 00:11:21.128 13:05:27 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:11:21.128 13:05:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:21.128 13:05:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:21.128 13:05:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:11:21.128 13:05:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:21.128 13:05:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:11:21.128 13:05:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:11:21.128 13:05:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:11:21.128 Process app_repeat pid: 60631 00:11:21.128 13:05:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60631 00:11:21.128 13:05:27 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:21.128 13:05:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:11:21.128 13:05:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60631' 00:11:21.128 13:05:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:21.128 spdk_app_start Round 0 00:11:21.128 13:05:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:21.128 13:05:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60631 /var/tmp/spdk-nbd.sock 00:11:21.128 13:05:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60631 ']' 00:11:21.128 13:05:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:21.128 13:05:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:21.128 13:05:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:21.128 13:05:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.128 13:05:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:21.387 [2024-12-06 13:05:27.660097] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
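Condensed, the event_scheduler run that just finished is a short RPC conversation with the app (rpc_cmd in the trace resolves to scripts/rpc.py against the app's socket): pick the dynamic scheduler before init, finish init, then create, retune, and delete pinned threads through the test plugin. In outline, using the same calls and values seen above (-n, -m, -a are presumably thread name, cpumask, and activity percentage):

rpc.py framework_set_scheduler dynamic
rpc.py framework_start_init
rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
rpc.py --plugin scheduler_plugin scheduler_thread_delete 12

The POWER errors above appear non-fatal here: no cpufreq governor is reachable in the VM, so the dpdk governor fails to initialize and only the scheduler's load/core/busy limits (20/80/95) are applied.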
00:11:21.387 [2024-12-06 13:05:27.660196] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60631 ] 00:11:21.387 [2024-12-06 13:05:27.819472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:21.387 [2024-12-06 13:05:27.861575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.387 [2024-12-06 13:05:27.861586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.644 13:05:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.644 13:05:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:21.644 13:05:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:21.903 Malloc0 00:11:21.903 13:05:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:22.233 Malloc1 00:11:22.233 13:05:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:22.233 13:05:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:22.234 13:05:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:22.234 13:05:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:22.234 13:05:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:22.234 13:05:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:22.234 13:05:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:22.234 13:05:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:22.234 13:05:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:22.234 13:05:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:22.234 13:05:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:22.234 13:05:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:22.234 13:05:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:22.234 13:05:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:22.234 13:05:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:22.234 13:05:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:22.508 /dev/nbd0 00:11:22.508 13:05:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:22.508 13:05:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:22.508 13:05:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:22.508 13:05:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:22.508 13:05:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:22.508 13:05:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:22.508 13:05:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:22.508 13:05:28 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:11:22.508 13:05:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:22.508 13:05:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:22.508 13:05:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:22.508 1+0 records in 00:11:22.508 1+0 records out 00:11:22.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309316 s, 13.2 MB/s 00:11:22.508 13:05:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:22.508 13:05:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:22.508 13:05:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:22.508 13:05:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:22.508 13:05:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:22.508 13:05:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:22.508 13:05:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:22.508 13:05:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:22.766 /dev/nbd1 00:11:22.766 13:05:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:22.766 13:05:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:22.766 13:05:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:22.766 13:05:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:22.766 13:05:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:22.766 13:05:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:22.766 13:05:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:22.766 13:05:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:22.766 13:05:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:22.766 13:05:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:22.766 13:05:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:22.766 1+0 records in 00:11:22.766 1+0 records out 00:11:22.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320512 s, 12.8 MB/s 00:11:22.766 13:05:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:22.766 13:05:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:22.766 13:05:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:22.766 13:05:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:22.766 13:05:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:22.766 13:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:22.766 13:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:22.766 13:05:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:22.766 13:05:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
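Both waitfornbd calls above run the same readiness probe: poll /proc/partitions until the nbd name shows up, then prove the device answers I/O by reading a single 4 KiB block with O_DIRECT and checking the copy is non-empty. A sketch reconstructed from the xtrace (the retry delay is assumed, since the trace only shows the loop bounds, and the scratch path here is illustrative):

waitfornbd() {
  local nbd_name=$1 i size
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1  # assumed back-off between retries; not visible in the trace
  done
  # one direct-I/O read: fails unless the kernel nbd device is really live
  dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
  size=$(stat -c %s /tmp/nbdtest)
  rm -f /tmp/nbdtest
  [ "$size" != 0 ]
}

In the log both probes succeed on the first pass, each copying 4096 bytes from the freshly exported Malloc bdev.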
00:11:22.766 13:05:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:23.331 13:05:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:23.331 { 00:11:23.331 "bdev_name": "Malloc0", 00:11:23.331 "nbd_device": "/dev/nbd0" 00:11:23.331 }, 00:11:23.331 { 00:11:23.331 "bdev_name": "Malloc1", 00:11:23.331 "nbd_device": "/dev/nbd1" 00:11:23.331 } 00:11:23.331 ]' 00:11:23.331 13:05:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:23.331 { 00:11:23.331 "bdev_name": "Malloc0", 00:11:23.331 "nbd_device": "/dev/nbd0" 00:11:23.331 }, 00:11:23.331 { 00:11:23.331 "bdev_name": "Malloc1", 00:11:23.331 "nbd_device": "/dev/nbd1" 00:11:23.331 } 00:11:23.331 ]' 00:11:23.331 13:05:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:23.331 13:05:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:23.331 /dev/nbd1' 00:11:23.331 13:05:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:23.331 13:05:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:23.331 /dev/nbd1' 00:11:23.331 13:05:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:23.331 13:05:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:23.331 13:05:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:23.331 13:05:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:23.331 13:05:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:23.331 13:05:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:23.331 13:05:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:23.331 13:05:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:23.332 256+0 records in 00:11:23.332 256+0 records out 00:11:23.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00859892 s, 122 MB/s 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:23.332 256+0 records in 00:11:23.332 256+0 records out 00:11:23.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241491 s, 43.4 MB/s 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:23.332 256+0 records in 00:11:23.332 256+0 records out 00:11:23.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288036 s, 36.4 MB/s 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:23.332 13:05:29 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.332 13:05:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:23.897 13:05:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:23.897 13:05:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:23.897 13:05:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:23.897 13:05:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.897 13:05:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.897 13:05:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:23.897 13:05:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:23.897 13:05:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.897 13:05:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.897 13:05:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:24.155 13:05:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:24.155 13:05:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:24.155 13:05:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:24.155 13:05:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.155 13:05:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.155 13:05:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:24.155 13:05:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:24.155 13:05:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.155 13:05:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:24.155 13:05:30 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:24.155 13:05:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:24.413 13:05:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:24.413 13:05:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:24.413 13:05:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:24.413 13:05:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:24.413 13:05:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:24.413 13:05:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:24.413 13:05:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:24.413 13:05:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:24.413 13:05:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:24.413 13:05:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:24.413 13:05:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:24.413 13:05:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:24.413 13:05:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:24.979 13:05:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:24.979 [2024-12-06 13:05:31.296168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:24.979 [2024-12-06 13:05:31.331773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.979 [2024-12-06 13:05:31.331783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.979 [2024-12-06 13:05:31.363370] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:24.979 [2024-12-06 13:05:31.363436] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:28.254 13:05:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:28.254 spdk_app_start Round 1 00:11:28.254 13:05:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:28.254 13:05:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60631 /var/tmp/spdk-nbd.sock 00:11:28.254 13:05:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60631 ']' 00:11:28.254 13:05:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:28.254 13:05:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:28.254 13:05:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
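What just ran is nbd_common.sh's write/verify data path: 1 MiB of random data is staged in a temp file, fanned out onto each exported /dev/nbd device with O_DIRECT, then compared back byte-for-byte before the devices are stopped over RPC. A minimal sketch of that pattern, reconstructed from the sh@70-85 trace lines above (paths and sizes exactly as this run used them):

nbd_list=('/dev/nbd0' '/dev/nbd1')
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

# write: stage 1 MiB (256 x 4 KiB blocks) of random data, then copy it to every
# device, bypassing the page cache so the blocks actually reach the NBD server
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
  dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify: byte-compare the first 1 MiB of each device against the staged file
for dev in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"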
00:11:28.254 13:05:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.254 13:05:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:28.254 13:05:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.254 13:05:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:28.254 13:05:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:28.512 Malloc0 00:11:28.512 13:05:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:28.769 Malloc1 00:11:28.769 13:05:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:28.769 13:05:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.769 13:05:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:28.769 13:05:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:28.769 13:05:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:28.769 13:05:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:28.769 13:05:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:28.769 13:05:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.769 13:05:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:28.769 13:05:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:28.769 13:05:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:28.769 13:05:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:28.769 13:05:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:28.769 13:05:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:28.769 13:05:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:28.769 13:05:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:29.025 /dev/nbd0 00:11:29.026 13:05:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:29.026 13:05:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:29.026 13:05:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:29.026 13:05:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:29.026 13:05:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:29.026 13:05:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:29.026 13:05:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:29.026 13:05:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:29.026 13:05:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:29.026 13:05:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:29.026 13:05:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:29.026 1+0 records in 00:11:29.026 1+0 records out 
00:11:29.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030435 s, 13.5 MB/s 00:11:29.026 13:05:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:29.026 13:05:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:29.026 13:05:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:29.026 13:05:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:29.026 13:05:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:29.026 13:05:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:29.026 13:05:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:29.026 13:05:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:29.283 /dev/nbd1 00:11:29.283 13:05:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:29.283 13:05:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:29.283 13:05:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:29.283 13:05:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:29.283 13:05:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:29.283 13:05:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:29.283 13:05:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:29.283 13:05:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:29.283 13:05:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:29.283 13:05:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:29.284 13:05:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:29.284 1+0 records in 00:11:29.284 1+0 records out 00:11:29.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031258 s, 13.1 MB/s 00:11:29.284 13:05:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:29.284 13:05:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:29.284 13:05:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:29.284 13:05:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:29.284 13:05:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:29.284 13:05:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:29.284 13:05:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:29.284 13:05:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:29.284 13:05:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.284 13:05:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:29.934 13:05:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:29.934 { 00:11:29.934 "bdev_name": "Malloc0", 00:11:29.934 "nbd_device": "/dev/nbd0" 00:11:29.934 }, 00:11:29.934 { 00:11:29.934 "bdev_name": "Malloc1", 00:11:29.934 "nbd_device": "/dev/nbd1" 00:11:29.934 } 
00:11:29.934 ]' 00:11:29.934 13:05:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:29.934 { 00:11:29.934 "bdev_name": "Malloc0", 00:11:29.934 "nbd_device": "/dev/nbd0" 00:11:29.934 }, 00:11:29.934 { 00:11:29.934 "bdev_name": "Malloc1", 00:11:29.934 "nbd_device": "/dev/nbd1" 00:11:29.934 } 00:11:29.934 ]' 00:11:29.934 13:05:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:29.934 13:05:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:29.934 /dev/nbd1' 00:11:29.934 13:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:29.934 /dev/nbd1' 00:11:29.934 13:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:29.934 13:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:29.934 13:05:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:29.934 13:05:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:29.934 13:05:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:29.934 13:05:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:29.934 13:05:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:29.934 13:05:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:29.934 13:05:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:29.935 256+0 records in 00:11:29.935 256+0 records out 00:11:29.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00937467 s, 112 MB/s 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:29.935 256+0 records in 00:11:29.935 256+0 records out 00:11:29.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293755 s, 35.7 MB/s 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:29.935 256+0 records in 00:11:29.935 256+0 records out 00:11:29.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278139 s, 37.7 MB/s 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:29.935 13:05:36 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.935 13:05:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:30.193 13:05:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:30.193 13:05:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:30.193 13:05:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:30.193 13:05:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.193 13:05:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.193 13:05:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:30.193 13:05:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:30.193 13:05:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.193 13:05:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.193 13:05:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:30.450 13:05:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:30.450 13:05:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:30.450 13:05:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:30.450 13:05:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.450 13:05:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.450 13:05:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:30.450 13:05:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:30.450 13:05:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.450 13:05:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:30.451 13:05:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:30.451 13:05:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:31.016 13:05:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:31.016 13:05:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:31.016 13:05:37 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:11:31.016 13:05:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:31.016 13:05:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:31.016 13:05:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:31.016 13:05:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:31.016 13:05:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:31.016 13:05:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:31.016 13:05:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:31.016 13:05:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:31.016 13:05:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:31.016 13:05:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:31.275 13:05:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:31.534 [2024-12-06 13:05:37.831418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:31.534 [2024-12-06 13:05:37.867117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.534 [2024-12-06 13:05:37.867128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.534 [2024-12-06 13:05:37.900156] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:31.534 [2024-12-06 13:05:37.900224] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:34.817 spdk_app_start Round 2 00:11:34.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:34.817 13:05:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:34.817 13:05:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:34.817 13:05:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60631 /var/tmp/spdk-nbd.sock 00:11:34.817 13:05:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60631 ']' 00:11:34.817 13:05:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:34.817 13:05:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.817 13:05:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
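Detachment is confirmed the same way attachment was counted: nbd_get_disks is queried over the app's RPC socket, jq extracts the device paths, and grep -c counts them. The sh@61-66 lines just above correspond to a helper along these lines; the bare 'true' step in the trace suggests a fallback that absorbs grep's non-zero exit on an empty list:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)                    # '[]' once every device is stopped
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)           # grep -c exits 1 on zero matches
[ "$count" -ne 0 ] && exit 1                                         # the test requires 0 before killing the app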
00:11:34.817 13:05:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.817 13:05:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:34.817 13:05:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:34.818 13:05:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:34.818 13:05:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:35.075 Malloc0 00:11:35.075 13:05:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:35.333 Malloc1 00:11:35.333 13:05:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:35.333 13:05:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:35.333 13:05:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:35.333 13:05:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:35.333 13:05:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:35.333 13:05:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:35.333 13:05:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:35.333 13:05:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:35.333 13:05:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:35.334 13:05:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:35.334 13:05:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:35.334 13:05:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:35.334 13:05:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:35.334 13:05:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:35.334 13:05:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:35.334 13:05:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:35.592 /dev/nbd0 00:11:35.592 13:05:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:35.592 13:05:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:35.592 13:05:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:35.592 13:05:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:35.592 13:05:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:35.592 13:05:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:35.592 13:05:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:35.592 13:05:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:35.592 13:05:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:35.592 13:05:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:35.592 13:05:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:35.592 1+0 records in 00:11:35.592 1+0 records out 
00:11:35.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282449 s, 14.5 MB/s 00:11:35.592 13:05:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:35.592 13:05:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:35.592 13:05:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:35.592 13:05:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:35.592 13:05:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:35.592 13:05:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:35.592 13:05:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:35.592 13:05:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:36.158 /dev/nbd1 00:11:36.158 13:05:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:36.158 13:05:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:36.158 13:05:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:36.158 13:05:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:36.158 13:05:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:36.158 13:05:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:36.158 13:05:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:36.158 13:05:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:36.158 13:05:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:36.158 13:05:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:36.158 13:05:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:36.158 1+0 records in 00:11:36.158 1+0 records out 00:11:36.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401344 s, 10.2 MB/s 00:11:36.158 13:05:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:36.158 13:05:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:36.158 13:05:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:36.158 13:05:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:36.158 13:05:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:36.158 13:05:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:36.158 13:05:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:36.158 13:05:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:36.158 13:05:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.158 13:05:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:36.417 { 00:11:36.417 "bdev_name": "Malloc0", 00:11:36.417 "nbd_device": "/dev/nbd0" 00:11:36.417 }, 00:11:36.417 { 00:11:36.417 "bdev_name": "Malloc1", 00:11:36.417 "nbd_device": "/dev/nbd1" 00:11:36.417 } 
00:11:36.417 ]' 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:36.417 { 00:11:36.417 "bdev_name": "Malloc0", 00:11:36.417 "nbd_device": "/dev/nbd0" 00:11:36.417 }, 00:11:36.417 { 00:11:36.417 "bdev_name": "Malloc1", 00:11:36.417 "nbd_device": "/dev/nbd1" 00:11:36.417 } 00:11:36.417 ]' 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:36.417 /dev/nbd1' 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:36.417 /dev/nbd1' 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:36.417 256+0 records in 00:11:36.417 256+0 records out 00:11:36.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00951709 s, 110 MB/s 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:36.417 256+0 records in 00:11:36.417 256+0 records out 00:11:36.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259215 s, 40.5 MB/s 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:36.417 13:05:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:36.675 256+0 records in 00:11:36.675 256+0 records out 00:11:36.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264298 s, 39.7 MB/s 00:11:36.675 13:05:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:36.675 13:05:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:36.676 13:05:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:36.676 13:05:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:36.676 13:05:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:36.676 13:05:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:36.676 13:05:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:36.676 13:05:42 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:36.676 13:05:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:36.676 13:05:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:36.676 13:05:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:36.676 13:05:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:36.676 13:05:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:36.676 13:05:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.676 13:05:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:36.676 13:05:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:36.676 13:05:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:36.676 13:05:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.676 13:05:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:36.934 13:05:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:36.934 13:05:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:36.934 13:05:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:36.934 13:05:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.934 13:05:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.934 13:05:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:36.934 13:05:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:36.934 13:05:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.934 13:05:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.934 13:05:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:37.193 13:05:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:37.193 13:05:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:37.193 13:05:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:37.193 13:05:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.193 13:05:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.193 13:05:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:37.193 13:05:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:37.193 13:05:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.193 13:05:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:37.193 13:05:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:37.193 13:05:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:37.450 13:05:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:37.450 13:05:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:37.450 13:05:43 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:11:37.707 13:05:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:37.707 13:05:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:37.707 13:05:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:37.708 13:05:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:37.708 13:05:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:37.708 13:05:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:37.708 13:05:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:37.708 13:05:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:37.708 13:05:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:37.708 13:05:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:37.965 13:05:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:37.965 [2024-12-06 13:05:44.418534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:37.965 [2024-12-06 13:05:44.451880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.965 [2024-12-06 13:05:44.451892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.965 [2024-12-06 13:05:44.482100] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:37.965 [2024-12-06 13:05:44.482160] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:41.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:41.246 13:05:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60631 /var/tmp/spdk-nbd.sock 00:11:41.246 13:05:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60631 ']' 00:11:41.246 13:05:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:41.246 13:05:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.246 13:05:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
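Three start/verify/kill cycles have now completed and the final restart is being awaited. The event.sh line numbers threaded through the trace (@23-@25 loop header and wait, @27/@28 malloc bdevs, @30 data verify, @34/@35 shutdown and sleep, @38/@39 final wait and kill) imply a driver loop roughly like this sketch, using the helpers as they are named above:

for i in {0..2}; do
  echo "spdk_app_start Round $i"
  waitforlisten 60631 /var/tmp/spdk-nbd.sock                     # app_repeat restarts after each SIGTERM
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # Malloc0
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # Malloc1
  nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
  rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
  sleep 3
done
waitforlisten 60631 /var/tmp/spdk-nbd.sock                       # Round 3: wait out the last restart
killprocess 60631                                                # then stop the app for good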
00:11:41.246 13:05:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.246 13:05:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:41.246 13:05:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.246 13:05:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:41.246 13:05:47 event.app_repeat -- event/event.sh@39 -- # killprocess 60631 00:11:41.246 13:05:47 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60631 ']' 00:11:41.246 13:05:47 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60631 00:11:41.246 13:05:47 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:11:41.246 13:05:47 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.246 13:05:47 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60631 00:11:41.246 killing process with pid 60631 00:11:41.246 13:05:47 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.246 13:05:47 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.246 13:05:47 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60631' 00:11:41.246 13:05:47 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60631 00:11:41.246 13:05:47 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60631 00:11:41.505 spdk_app_start is called in Round 0. 00:11:41.505 Shutdown signal received, stop current app iteration 00:11:41.505 Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 reinitialization... 00:11:41.505 spdk_app_start is called in Round 1. 00:11:41.505 Shutdown signal received, stop current app iteration 00:11:41.505 Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 reinitialization... 00:11:41.505 spdk_app_start is called in Round 2. 00:11:41.505 Shutdown signal received, stop current app iteration 00:11:41.505 Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 reinitialization... 00:11:41.505 spdk_app_start is called in Round 3. 00:11:41.505 Shutdown signal received, stop current app iteration 00:11:41.505 ************************************ 00:11:41.505 END TEST app_repeat 00:11:41.505 ************************************ 00:11:41.505 13:05:47 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:41.505 13:05:47 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:41.505 00:11:41.505 real 0m20.233s 00:11:41.505 user 0m47.183s 00:11:41.505 sys 0m2.946s 00:11:41.505 13:05:47 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.505 13:05:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:41.505 13:05:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:41.505 13:05:47 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:41.505 13:05:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:41.505 13:05:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.505 13:05:47 event -- common/autotest_common.sh@10 -- # set +x 00:11:41.505 ************************************ 00:11:41.505 START TEST cpu_locks 00:11:41.505 ************************************ 00:11:41.505 13:05:47 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:41.505 * Looking for test storage... 
00:11:41.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:41.505 13:05:47 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:41.505 13:05:47 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:11:41.505 13:05:47 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:41.764 13:05:48 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.764 13:05:48 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:11:41.764 13:05:48 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.764 13:05:48 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:41.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.764 --rc genhtml_branch_coverage=1 00:11:41.764 --rc genhtml_function_coverage=1 00:11:41.764 --rc genhtml_legend=1 00:11:41.764 --rc geninfo_all_blocks=1 00:11:41.764 --rc geninfo_unexecuted_blocks=1 00:11:41.764 00:11:41.764 ' 00:11:41.764 13:05:48 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:41.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.764 --rc genhtml_branch_coverage=1 00:11:41.764 --rc genhtml_function_coverage=1 
00:11:41.764 --rc genhtml_legend=1 00:11:41.764 --rc geninfo_all_blocks=1 00:11:41.764 --rc geninfo_unexecuted_blocks=1 00:11:41.764 00:11:41.764 ' 00:11:41.765 13:05:48 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:41.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.765 --rc genhtml_branch_coverage=1 00:11:41.765 --rc genhtml_function_coverage=1 00:11:41.765 --rc genhtml_legend=1 00:11:41.765 --rc geninfo_all_blocks=1 00:11:41.765 --rc geninfo_unexecuted_blocks=1 00:11:41.765 00:11:41.765 ' 00:11:41.765 13:05:48 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:41.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.765 --rc genhtml_branch_coverage=1 00:11:41.765 --rc genhtml_function_coverage=1 00:11:41.765 --rc genhtml_legend=1 00:11:41.765 --rc geninfo_all_blocks=1 00:11:41.765 --rc geninfo_unexecuted_blocks=1 00:11:41.765 00:11:41.765 ' 00:11:41.765 13:05:48 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:41.765 13:05:48 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:41.765 13:05:48 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:41.765 13:05:48 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:41.765 13:05:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:41.765 13:05:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.765 13:05:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:41.765 ************************************ 00:11:41.765 START TEST default_locks 00:11:41.765 ************************************ 00:11:41.765 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:11:41.765 13:05:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61276 00:11:41.765 13:05:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61276 00:11:41.765 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 61276 ']' 00:11:41.765 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.765 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.765 13:05:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:41.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.765 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.765 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.765 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:41.765 [2024-12-06 13:05:48.173559] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:11:41.765 [2024-12-06 13:05:48.173660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61276 ] 00:11:42.038 [2024-12-06 13:05:48.319687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.038 [2024-12-06 13:05:48.353926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.038 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.038 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:11:42.038 13:05:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61276 00:11:42.038 13:05:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61276 00:11:42.038 13:05:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:42.634 13:05:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61276 00:11:42.634 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 61276 ']' 00:11:42.634 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 61276 00:11:42.634 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:11:42.634 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.634 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61276 00:11:42.634 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.634 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.634 killing process with pid 61276 00:11:42.634 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61276' 00:11:42.634 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 61276 00:11:42.634 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 61276 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61276 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61276 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 61276 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 61276 ']' 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.893 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:42.893 ERROR: process (pid: 61276) is no longer running 00:11:42.893 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61276) - No such process 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:42.893 00:11:42.893 real 0m1.085s 00:11:42.893 user 0m1.180s 00:11:42.893 sys 0m0.395s 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.893 13:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:42.893 ************************************ 00:11:42.893 END TEST default_locks 00:11:42.893 ************************************ 00:11:42.893 13:05:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:42.893 13:05:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:42.893 13:05:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.893 13:05:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:42.893 ************************************ 00:11:42.893 START TEST default_locks_via_rpc 00:11:42.893 ************************************ 00:11:42.893 13:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:11:42.893 13:05:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61321 00:11:42.893 13:05:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61321 00:11:42.893 13:05:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:42.893 13:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61321 ']' 00:11:42.894 13:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.894 13:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.894 13:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.894 13:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.894 13:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.894 [2024-12-06 13:05:49.309375] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:42.894 [2024-12-06 13:05:49.309482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61321 ] 00:11:43.152 [2024-12-06 13:05:49.457970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.152 [2024-12-06 13:05:49.496462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.087 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.087 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:44.087 13:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:44.087 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.087 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.087 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.087 13:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:44.087 13:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:44.087 13:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:44.087 13:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:44.087 13:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:44.087 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.087 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.087 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.087 13:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 61321 00:11:44.087 13:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61321 00:11:44.087 13:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:44.346 13:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61321 00:11:44.346 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 61321 ']' 00:11:44.346 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 61321 00:11:44.346 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:11:44.346 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.346 13:05:50 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61321 00:11:44.346 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.346 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.346 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61321' 00:11:44.346 killing process with pid 61321 00:11:44.346 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 61321 00:11:44.346 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 61321 00:11:44.604 00:11:44.604 real 0m1.774s 00:11:44.604 user 0m2.100s 00:11:44.604 sys 0m0.455s 00:11:44.604 13:05:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.604 13:05:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.604 ************************************ 00:11:44.604 END TEST default_locks_via_rpc 00:11:44.604 ************************************ 00:11:44.604 13:05:51 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:44.604 13:05:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:44.604 13:05:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.604 13:05:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:44.604 ************************************ 00:11:44.604 START TEST non_locking_app_on_locked_coremask 00:11:44.605 ************************************ 00:11:44.605 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:11:44.605 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61384 00:11:44.605 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61384 /var/tmp/spdk.sock 00:11:44.605 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:44.605 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61384 ']' 00:11:44.605 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.605 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.605 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.605 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.605 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:44.605 [2024-12-06 13:05:51.121768] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
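Every test in cpu_locks.sh opens with the launch pattern traced above: spdk_tgt is started in the background on a core mask, and the harness blocks in waitforlisten until the RPC socket answers. A minimal sketch of that polling loop, assuming SPDK's scripts/rpc.py is available for the probe (the real helper in autotest_common.sh is more defensive, which is where the "is no longer running" error seen earlier comes from when the pid dies mid-wait):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 100; i > 0; i--)); do                    # max_retries=100, as in the trace
            kill -0 "$pid" 2>/dev/null || return 1         # target died while we waited
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }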
00:11:44.605 [2024-12-06 13:05:51.121875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61384 ] 00:11:44.863 [2024-12-06 13:05:51.266286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.863 [2024-12-06 13:05:51.298777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.122 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.122 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:45.122 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61399 00:11:45.122 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61399 /var/tmp/spdk2.sock 00:11:45.122 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61399 ']' 00:11:45.122 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:45.122 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.122 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:45.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:45.122 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.122 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:45.122 13:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:45.122 [2024-12-06 13:05:51.534343] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:45.122 [2024-12-06 13:05:51.534441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61399 ] 00:11:45.381 [2024-12-06 13:05:51.698724] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
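The NOTICE just above is the whole point of non_locking_app_on_locked_coremask: the first target claims the lock file for core 0 (following the /var/tmp/spdk_cpu_lock_NNN naming that check_remaining_locks globs later in this log), and the second instance is allowed onto the same mask only because it opts out of locking. The pair of launches, reduced to their flags:

    # first instance: takes core 0 and its lock file
    spdk_tgt -m 0x1
    # second instance: same mask, but skips the claim, so startup succeeds
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock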
00:11:45.381 [2024-12-06 13:05:51.698784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.381 [2024-12-06 13:05:51.763601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.317 13:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.317 13:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:46.317 13:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61384 00:11:46.317 13:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:46.317 13:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61384 00:11:46.883 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61384 00:11:46.883 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61384 ']' 00:11:46.883 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61384 00:11:46.883 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:46.883 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.883 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61384 00:11:46.883 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:46.883 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:46.883 killing process with pid 61384 00:11:46.883 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61384' 00:11:46.883 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61384 00:11:46.883 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61384 00:11:47.450 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61399 00:11:47.450 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61399 ']' 00:11:47.450 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61399 00:11:47.450 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:47.450 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.450 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61399 00:11:47.450 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.450 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.450 killing process with pid 61399 00:11:47.450 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61399' 00:11:47.450 13:05:53 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61399 00:11:47.450 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61399 00:11:47.708 00:11:47.708 real 0m3.057s 00:11:47.708 user 0m3.612s 00:11:47.708 sys 0m0.875s 00:11:47.708 13:05:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.708 13:05:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:47.708 ************************************ 00:11:47.708 END TEST non_locking_app_on_locked_coremask 00:11:47.708 ************************************ 00:11:47.708 13:05:54 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:47.708 13:05:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:47.708 13:05:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.708 13:05:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:47.708 ************************************ 00:11:47.708 START TEST locking_app_on_unlocked_coremask 00:11:47.708 ************************************ 00:11:47.709 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:11:47.709 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61480 00:11:47.709 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61480 /var/tmp/spdk.sock 00:11:47.709 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:47.709 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61480 ']' 00:11:47.709 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.709 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.709 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.709 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.709 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:47.970 [2024-12-06 13:05:54.239631] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:47.970 [2024-12-06 13:05:54.239725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61480 ] 00:11:47.970 [2024-12-06 13:05:54.392479] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
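killprocess, which just tore down pids 61384 and 61399, is the suite's standard cleanup. A condensed sketch of what those trace lines correspond to (the reactor_0 = sudo comparison above exists because autotest_common.sh treats sudo-wrapped targets specially; that branch is omitted here):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                    # fail fast if it is already gone
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target
        echo "killing process with pid $pid"
        kill "$pid"
        # wait only reaps children of this shell, which holds here since the
        # targets were backgrounded by the same test script
        wait "$pid"
    }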
00:11:47.970 [2024-12-06 13:05:54.392559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.970 [2024-12-06 13:05:54.431771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.228 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.228 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:48.228 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61489 00:11:48.228 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:48.228 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61489 /var/tmp/spdk2.sock 00:11:48.228 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61489 ']' 00:11:48.228 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:48.228 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:48.228 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:48.228 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.228 13:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:48.228 [2024-12-06 13:05:54.672667] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
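locks_exist, seen above for pids 61321 and 61384 and about to run against 61489, is the positive check paired with every launch: a live target should be holding file locks whose paths contain spdk_cpu_lock. It is essentially the two commands from the trace (lslocks is the util-linux tool; error handling is left out of this sketch):

    locks_exist() {
        # lslocks lists every file lock held by the pid; SPDK names its
        # per-core lock files /var/tmp/spdk_cpu_lock_NNN
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }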
00:11:48.228 [2024-12-06 13:05:54.672752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61489 ] 00:11:48.486 [2024-12-06 13:05:54.833519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.486 [2024-12-06 13:05:54.898908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.742 13:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.742 13:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:48.742 13:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61489 00:11:48.742 13:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61489 00:11:48.742 13:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:49.677 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61480 00:11:49.677 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61480 ']' 00:11:49.677 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61480 00:11:49.677 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:49.677 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.677 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61480 00:11:49.677 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.677 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.677 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61480' 00:11:49.677 killing process with pid 61480 00:11:49.677 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61480 00:11:49.677 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61480 00:11:50.243 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61489 00:11:50.243 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61489 ']' 00:11:50.243 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61489 00:11:50.243 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:50.243 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.243 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61489 00:11:50.243 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.243 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.243 killing process with pid 61489 00:11:50.243 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61489' 00:11:50.243 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61489 00:11:50.243 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61489 00:11:50.501 00:11:50.501 real 0m2.611s 00:11:50.501 user 0m2.958s 00:11:50.501 sys 0m0.884s 00:11:50.501 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.501 13:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:50.501 ************************************ 00:11:50.501 END TEST locking_app_on_unlocked_coremask 00:11:50.501 ************************************ 00:11:50.501 13:05:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:50.501 13:05:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:50.501 13:05:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.501 13:05:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:50.501 ************************************ 00:11:50.501 START TEST locking_app_on_locked_coremask 00:11:50.501 ************************************ 00:11:50.501 13:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:11:50.501 13:05:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61554 00:11:50.501 13:05:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61554 /var/tmp/spdk.sock 00:11:50.501 13:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61554 ']' 00:11:50.501 13:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.501 13:05:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:50.501 13:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.501 13:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.501 13:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.501 13:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:50.501 [2024-12-06 13:05:56.911706] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
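locking_app_on_locked_coremask, which has just started target 61554 with locking enabled on core 0, is a negative test: the second launch (61582, below) must fail, and the "Cannot create lock on core 0" error plus the "No such process" kill that follow are the expected outcome. The suite encodes the expectation with the NOT wrapper whose xtrace fills the next lines; stripped of its valid_exec_arg argument checking and the signal handling behind the es > 128 test, it reduces to:

    NOT() {
        local es=0
        "$@" || es=$?     # run the wrapped command and capture its exit status
        (( es != 0 ))     # succeed only when the command failed
    }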
00:11:50.501 [2024-12-06 13:05:56.911815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61554 ] 00:11:50.758 [2024-12-06 13:05:57.062299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.758 [2024-12-06 13:05:57.101651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.701 13:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.701 13:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:51.702 13:05:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61582 00:11:51.702 13:05:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61582 /var/tmp/spdk2.sock 00:11:51.702 13:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:51.702 13:05:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:51.702 13:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61582 /var/tmp/spdk2.sock 00:11:51.702 13:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:51.702 13:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:51.702 13:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:51.702 13:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:51.702 13:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61582 /var/tmp/spdk2.sock 00:11:51.702 13:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61582 ']' 00:11:51.702 13:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:51.702 13:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:51.702 13:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:51.702 13:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.702 13:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:51.702 [2024-12-06 13:05:57.961846] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:11:51.702 [2024-12-06 13:05:57.961969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61582 ] 00:11:51.702 [2024-12-06 13:05:58.125502] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61554 has claimed it. 00:11:51.702 [2024-12-06 13:05:58.125573] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:52.267 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61582) - No such process 00:11:52.267 ERROR: process (pid: 61582) is no longer running 00:11:52.267 13:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.267 13:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:52.267 13:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:52.267 13:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:52.267 13:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:52.267 13:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:52.267 13:05:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61554 00:11:52.267 13:05:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61554 00:11:52.267 13:05:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:52.832 13:05:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61554 00:11:52.832 13:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61554 ']' 00:11:52.832 13:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61554 00:11:52.832 13:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:52.832 13:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.832 13:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61554 00:11:52.832 13:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:52.832 killing process with pid 61554 00:11:52.832 13:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:52.832 13:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61554' 00:11:52.832 13:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61554 00:11:52.832 13:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61554 00:11:53.091 00:11:53.091 real 0m2.579s 00:11:53.091 user 0m3.146s 00:11:53.091 sys 0m0.531s 00:11:53.091 ************************************ 00:11:53.091 END TEST locking_app_on_locked_coremask 00:11:53.091 ************************************ 00:11:53.091 13:05:59 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.091 13:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:53.091 13:05:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:53.091 13:05:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:53.091 13:05:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.091 13:05:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:53.091 ************************************ 00:11:53.091 START TEST locking_overlapped_coremask 00:11:53.091 ************************************ 00:11:53.091 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:11:53.091 13:05:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61634 00:11:53.091 13:05:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:53.091 13:05:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61634 /var/tmp/spdk.sock 00:11:53.091 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61634 ']' 00:11:53.091 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.091 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.091 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.091 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.091 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:53.091 [2024-12-06 13:05:59.527694] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:11:53.091 [2024-12-06 13:05:59.527795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61634 ] 00:11:53.349 [2024-12-06 13:05:59.677762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:53.349 [2024-12-06 13:05:59.720979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.349 [2024-12-06 13:05:59.720987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.349 [2024-12-06 13:05:59.720817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.606 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.606 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:53.607 13:05:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61646 00:11:53.607 13:05:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61646 /var/tmp/spdk2.sock 00:11:53.607 13:05:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:53.607 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:53.607 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61646 /var/tmp/spdk2.sock 00:11:53.607 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:53.607 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.607 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:53.607 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.607 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61646 /var/tmp/spdk2.sock 00:11:53.607 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61646 ']' 00:11:53.607 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:53.607 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:53.607 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:53.607 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.607 13:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:53.607 [2024-12-06 13:06:00.008023] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
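The two masks are picked to collide on exactly one core: 0x7 is binary 111 (cores 0 through 2, matching the three reactors above) and 0x1c is binary 11100 (cores 2 through 4), so the intersection 0x7 & 0x1c = 0x4 is core 2. That is why the claim error below names core 2 specifically. A quick check of the overlap:

    $ printf '0x%x\n' $(( 0x7 & 0x1c ))   # intersection of the two reactor masks
    0x4                                   # only bit 2 is set, so core 2 is contested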
00:11:53.607 [2024-12-06 13:06:00.008814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61646 ] 00:11:53.865 [2024-12-06 13:06:00.189217] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61634 has claimed it. 00:11:53.865 [2024-12-06 13:06:00.189292] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:54.431 ERROR: process (pid: 61646) is no longer running 00:11:54.431 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61646) - No such process 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61634 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61634 ']' 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61634 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61634 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.431 killing process with pid 61634 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61634' 00:11:54.431 13:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61634 00:11:54.431 13:06:00 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61634 00:11:54.689 00:11:54.689 real 0m1.574s 00:11:54.689 user 0m4.391s 00:11:54.689 sys 0m0.334s 00:11:54.689 13:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.689 ************************************ 00:11:54.689 END TEST locking_overlapped_coremask 00:11:54.689 ************************************ 00:11:54.689 13:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:54.689 13:06:01 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:54.689 13:06:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:54.689 13:06:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.689 13:06:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:54.689 ************************************ 00:11:54.689 START TEST locking_overlapped_coremask_via_rpc 00:11:54.689 ************************************ 00:11:54.689 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:11:54.689 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61696 00:11:54.689 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61696 /var/tmp/spdk.sock 00:11:54.690 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61696 ']' 00:11:54.690 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:54.690 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.690 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.690 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.690 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.690 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.690 [2024-12-06 13:06:01.127585] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:54.690 [2024-12-06 13:06:01.127693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61696 ] 00:11:54.948 [2024-12-06 13:06:01.283270] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
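check_remaining_locks, which closed out the previous test, asserts that after the failed second claim the surviving 0x7-mask target still holds exactly the lock files for cores 0 through 2, with nothing leaked and nothing lost. The comparison is the glob-versus-brace-expansion trick from the trace:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                    # what actually exists
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # what a 0x7 target should hold
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }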
00:11:54.948 [2024-12-06 13:06:01.283368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:54.948 [2024-12-06 13:06:01.344555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.948 [2024-12-06 13:06:01.344644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.948 [2024-12-06 13:06:01.344667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.205 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.205 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:55.205 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61713 00:11:55.205 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61713 /var/tmp/spdk2.sock 00:11:55.205 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:55.205 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61713 ']' 00:11:55.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:55.205 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:55.205 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.205 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:55.205 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.205 13:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.205 [2024-12-06 13:06:01.639138] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:55.205 [2024-12-06 13:06:01.639254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61713 ] 00:11:55.463 [2024-12-06 13:06:01.805988] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:55.463 [2024-12-06 13:06:01.806053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:55.463 [2024-12-06 13:06:01.878535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.463 [2024-12-06 13:06:01.882052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:55.463 [2024-12-06 13:06:01.882054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.400 [2024-12-06 13:06:02.753168] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61696 has claimed it. 
00:11:56.400 2024/12/06 13:06:02 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:11:56.400 request: 00:11:56.400 { 00:11:56.400 "method": "framework_enable_cpumask_locks", 00:11:56.400 "params": {} 00:11:56.400 } 00:11:56.400 Got JSON-RPC error response 00:11:56.400 GoRPCClient: error on JSON-RPC call 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61696 /var/tmp/spdk.sock 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61696 ']' 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.400 13:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.659 13:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.659 13:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:56.659 13:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61713 /var/tmp/spdk2.sock 00:11:56.659 13:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61713 ']' 00:11:56.659 13:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:56.659 13:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.659 13:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:56.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
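The via_rpc variant moves the claim from startup to runtime: both targets boot with --disable-cpumask-locks, the first (mask 0x7, socket /var/tmp/spdk.sock) then takes its locks through the framework_enable_cpumask_locks RPC, and the second (mask 0x1c) loses the race for core 2, producing the Code=-32603 response above. Reproduced by hand it would look roughly like this, using SPDK's scripts/rpc.py (this run actually drives the call through the Go JSON-RPC client, hence the GoRPCClient line):

    # first target claims cores 0-2 at runtime:
    scripts/rpc.py framework_enable_cpumask_locks
    # second target then fails on the shared core:
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # => error received ... Code=-32603 Msg=Failed to claim CPU core: 2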
00:11:56.659 13:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.659 13:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.225 13:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.225 13:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:57.225 13:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:57.225 13:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:57.225 13:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:57.226 13:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:57.226 00:11:57.226 real 0m2.487s 00:11:57.226 user 0m1.567s 00:11:57.226 sys 0m0.202s 00:11:57.226 13:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.226 13:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.226 ************************************ 00:11:57.226 END TEST locking_overlapped_coremask_via_rpc 00:11:57.226 ************************************ 00:11:57.226 13:06:03 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:57.226 13:06:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61696 ]] 00:11:57.226 13:06:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61696 00:11:57.226 13:06:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61696 ']' 00:11:57.226 13:06:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61696 00:11:57.226 13:06:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:57.226 13:06:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.226 13:06:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61696 00:11:57.226 13:06:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:57.226 13:06:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:57.226 13:06:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61696' 00:11:57.226 killing process with pid 61696 00:11:57.226 13:06:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61696 00:11:57.226 13:06:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61696 00:11:57.483 13:06:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61713 ]] 00:11:57.483 13:06:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61713 00:11:57.483 13:06:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61713 ']' 00:11:57.483 13:06:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61713 00:11:57.483 13:06:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:57.483 13:06:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.483 
13:06:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61713 00:11:57.483 13:06:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:57.483 13:06:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:57.483 killing process with pid 61713 00:11:57.483 13:06:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61713' 00:11:57.483 13:06:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61713 00:11:57.483 13:06:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61713 00:11:57.741 13:06:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:57.741 13:06:04 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:57.741 13:06:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61696 ]] 00:11:57.741 13:06:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61696 00:11:57.741 13:06:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61696 ']' 00:11:57.741 13:06:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61696 00:11:57.741 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61696) - No such process 00:11:57.741 Process with pid 61696 is not found 00:11:57.741 13:06:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61696 is not found' 00:11:57.741 13:06:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61713 ]] 00:11:57.741 13:06:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61713 00:11:57.741 13:06:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61713 ']' 00:11:57.741 13:06:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61713 00:11:57.741 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61713) - No such process 00:11:57.741 Process with pid 61713 is not found 00:11:57.741 13:06:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61713 is not found' 00:11:57.741 13:06:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:57.741 ************************************ 00:11:57.741 END TEST cpu_locks 00:11:57.741 ************************************ 00:11:57.741 00:11:57.741 real 0m16.244s 00:11:57.741 user 0m31.211s 00:11:57.741 sys 0m4.346s 00:11:57.741 13:06:04 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.741 13:06:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:57.741 00:11:57.741 real 0m42.569s 00:11:57.741 user 1m27.343s 00:11:57.741 sys 0m7.939s 00:11:57.741 13:06:04 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.741 13:06:04 event -- common/autotest_common.sh@10 -- # set +x 00:11:57.741 ************************************ 00:11:57.741 END TEST event 00:11:57.741 ************************************ 00:11:57.741 13:06:04 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:57.741 13:06:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:57.741 13:06:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.741 13:06:04 -- common/autotest_common.sh@10 -- # set +x 00:11:57.741 ************************************ 00:11:57.741 START TEST thread 00:11:57.741 ************************************ 00:11:57.741 13:06:04 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:57.999 * Looking for test storage... 
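Before thread.sh runs anything, its coverage setup probes the installed lcov version with the pure-bash comparison that fills the next stretch of log (lt 1.15 2 via cmp_versions from scripts/common.sh). Condensed to its core, under the assumption that missing components compare as 0, so 1.15 < 2 because 1 < 2 at the first position:

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"   # split 1.15 into (1 15)
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]               # equal versions satisfy ==, <=, >=
    }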
00:11:57.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:57.999 13:06:04 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:57.999 13:06:04 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:57.999 13:06:04 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:11:57.999 13:06:04 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:57.999 13:06:04 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.999 13:06:04 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.999 13:06:04 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.999 13:06:04 thread -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.999 13:06:04 thread -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.999 13:06:04 thread -- scripts/common.sh@337 -- # IFS=.-: 00:11:57.999 13:06:04 thread -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.999 13:06:04 thread -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.999 13:06:04 thread -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.999 13:06:04 thread -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.999 13:06:04 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.999 13:06:04 thread -- scripts/common.sh@344 -- # case "$op" in 00:11:57.999 13:06:04 thread -- scripts/common.sh@345 -- # : 1 00:11:57.999 13:06:04 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.999 13:06:04 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:57.999 13:06:04 thread -- scripts/common.sh@365 -- # decimal 1 00:11:57.999 13:06:04 thread -- scripts/common.sh@353 -- # local d=1 00:11:57.999 13:06:04 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.999 13:06:04 thread -- scripts/common.sh@355 -- # echo 1 00:11:57.999 13:06:04 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.999 13:06:04 thread -- scripts/common.sh@366 -- # decimal 2 00:11:57.999 13:06:04 thread -- scripts/common.sh@353 -- # local d=2 00:11:57.999 13:06:04 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.999 13:06:04 thread -- scripts/common.sh@355 -- # echo 2 00:11:57.999 13:06:04 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.999 13:06:04 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.999 13:06:04 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.999 13:06:04 thread -- scripts/common.sh@368 -- # return 0 00:11:57.999 13:06:04 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.999 13:06:04 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:57.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.999 --rc genhtml_branch_coverage=1 00:11:57.999 --rc genhtml_function_coverage=1 00:11:57.999 --rc genhtml_legend=1 00:11:57.999 --rc geninfo_all_blocks=1 00:11:57.999 --rc geninfo_unexecuted_blocks=1 00:11:57.999 00:11:57.999 ' 00:11:57.999 13:06:04 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:57.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.999 --rc genhtml_branch_coverage=1 00:11:57.999 --rc genhtml_function_coverage=1 00:11:57.999 --rc genhtml_legend=1 00:11:57.999 --rc geninfo_all_blocks=1 00:11:57.999 --rc geninfo_unexecuted_blocks=1 00:11:57.999 00:11:57.999 ' 00:11:57.999 13:06:04 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:57.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:11:57.999 --rc genhtml_branch_coverage=1 00:11:57.999 --rc genhtml_function_coverage=1 00:11:57.999 --rc genhtml_legend=1 00:11:57.999 --rc geninfo_all_blocks=1 00:11:57.999 --rc geninfo_unexecuted_blocks=1 00:11:57.999 00:11:57.999 ' 00:11:57.999 13:06:04 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:57.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.999 --rc genhtml_branch_coverage=1 00:11:57.999 --rc genhtml_function_coverage=1 00:11:57.999 --rc genhtml_legend=1 00:11:57.999 --rc geninfo_all_blocks=1 00:11:57.999 --rc geninfo_unexecuted_blocks=1 00:11:57.999 00:11:57.999 ' 00:11:57.999 13:06:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:57.999 13:06:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:57.999 13:06:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.999 13:06:04 thread -- common/autotest_common.sh@10 -- # set +x 00:11:57.999 ************************************ 00:11:57.999 START TEST thread_poller_perf 00:11:57.999 ************************************ 00:11:57.999 13:06:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:57.999 [2024-12-06 13:06:04.429775] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:57.999 [2024-12-06 13:06:04.430360] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61873 ] 00:11:58.257 [2024-12-06 13:06:04.576670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.257 Running 1000 pollers for 1 seconds with 1 microseconds period. 
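The flag-to-behavior mapping for this poller_perf invocation can be read off the announcement line it just printed; the annotation below is inferred from that line and from the second run further down, not from the tool's help text:

  # -b 1000  -> register 1000 pollers on the reactor
  # -l 1     -> poller period in microseconds (the next run passes -l 0,
  #             i.e. pollers that fire on every reactor iteration)
  # -t 1     -> measure for 1 second
  /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1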
00:11:58.257 [2024-12-06 13:06:04.616005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.188 [2024-12-06T13:06:05.716Z] ====================================== 00:11:59.188 [2024-12-06T13:06:05.716Z] busy:2209048333 (cyc) 00:11:59.188 [2024-12-06T13:06:05.716Z] total_run_count: 282000 00:11:59.188 [2024-12-06T13:06:05.716Z] tsc_hz: 2200000000 (cyc) 00:11:59.188 [2024-12-06T13:06:05.717Z] ====================================== 00:11:59.189 [2024-12-06T13:06:05.717Z] poller_cost: 7833 (cyc), 3560 (nsec) 00:11:59.189 00:11:59.189 real 0m1.248s 00:11:59.189 user 0m1.108s 00:11:59.189 sys 0m0.032s 00:11:59.189 13:06:05 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.189 13:06:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:59.189 ************************************ 00:11:59.189 END TEST thread_poller_perf 00:11:59.189 ************************************ 00:11:59.189 13:06:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:59.189 13:06:05 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:59.189 13:06:05 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.189 13:06:05 thread -- common/autotest_common.sh@10 -- # set +x 00:11:59.446 ************************************ 00:11:59.446 START TEST thread_poller_perf 00:11:59.446 ************************************ 00:11:59.446 13:06:05 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:59.446 [2024-12-06 13:06:05.734398] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:59.446 [2024-12-06 13:06:05.734500] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61903 ] 00:11:59.446 [2024-12-06 13:06:05.887029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.446 [2024-12-06 13:06:05.927212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.446 Running 1000 pollers for 1 seconds with 0 microseconds period. 
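The poller_cost row in the 1-microsecond table above follows from the other three counters; a back-of-envelope check, assuming the cost is simply busy cycles divided by completed polls and the nanosecond figure is derived from the reported TSC rate:

  # cyc per poll = busy / total_run_count; nsec per poll = cyc / (tsc_hz / 1e9)
  busy=2209048333 runs=282000 tsc_hz=2200000000
  awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" \
      'BEGIN { cyc = b / r; printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc / (hz / 1e9) }'
  # -> poller_cost: 7833 (cyc), 3560 (nsec), matching the table above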
00:12:00.821 [2024-12-06T13:06:07.349Z] ====================================== 00:12:00.821 [2024-12-06T13:06:07.349Z] busy:2202431691 (cyc) 00:12:00.821 [2024-12-06T13:06:07.349Z] total_run_count: 3311000 00:12:00.821 [2024-12-06T13:06:07.349Z] tsc_hz: 2200000000 (cyc) 00:12:00.821 [2024-12-06T13:06:07.349Z] ====================================== 00:12:00.821 [2024-12-06T13:06:07.349Z] poller_cost: 665 (cyc), 302 (nsec) 00:12:00.821 00:12:00.821 real 0m1.258s 00:12:00.821 user 0m1.108s 00:12:00.821 sys 0m0.043s 00:12:00.821 13:06:06 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.821 13:06:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:00.821 ************************************ 00:12:00.821 END TEST thread_poller_perf 00:12:00.821 ************************************ 00:12:00.821 13:06:07 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:12:00.821 00:12:00.821 real 0m2.777s 00:12:00.821 user 0m2.364s 00:12:00.821 sys 0m0.194s 00:12:00.821 13:06:07 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.821 13:06:07 thread -- common/autotest_common.sh@10 -- # set +x 00:12:00.821 ************************************ 00:12:00.821 END TEST thread 00:12:00.821 ************************************ 00:12:00.821 13:06:07 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:12:00.821 13:06:07 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:00.821 13:06:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:00.821 13:06:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.821 13:06:07 -- common/autotest_common.sh@10 -- # set +x 00:12:00.821 ************************************ 00:12:00.821 START TEST app_cmdline 00:12:00.821 ************************************ 00:12:00.821 13:06:07 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:00.821 * Looking for test storage... 
00:12:00.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:00.821 13:06:07 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:00.821 13:06:07 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:00.821 13:06:07 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:12:00.821 13:06:07 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:00.821 13:06:07 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.821 13:06:07 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.821 13:06:07 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.821 13:06:07 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@345 -- # : 1 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.822 13:06:07 app_cmdline -- scripts/common.sh@368 -- # return 0 00:12:00.822 13:06:07 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.822 13:06:07 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:00.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.822 --rc genhtml_branch_coverage=1 00:12:00.822 --rc genhtml_function_coverage=1 00:12:00.822 --rc genhtml_legend=1 00:12:00.822 --rc geninfo_all_blocks=1 00:12:00.822 --rc geninfo_unexecuted_blocks=1 00:12:00.822 00:12:00.822 ' 00:12:00.822 13:06:07 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:00.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.822 --rc genhtml_branch_coverage=1 00:12:00.822 --rc genhtml_function_coverage=1 00:12:00.822 --rc genhtml_legend=1 00:12:00.822 --rc geninfo_all_blocks=1 00:12:00.822 --rc geninfo_unexecuted_blocks=1 00:12:00.822 
00:12:00.822 ' 00:12:00.822 13:06:07 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:00.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.822 --rc genhtml_branch_coverage=1 00:12:00.822 --rc genhtml_function_coverage=1 00:12:00.822 --rc genhtml_legend=1 00:12:00.822 --rc geninfo_all_blocks=1 00:12:00.822 --rc geninfo_unexecuted_blocks=1 00:12:00.822 00:12:00.822 ' 00:12:00.822 13:06:07 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:00.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.822 --rc genhtml_branch_coverage=1 00:12:00.822 --rc genhtml_function_coverage=1 00:12:00.822 --rc genhtml_legend=1 00:12:00.822 --rc geninfo_all_blocks=1 00:12:00.822 --rc geninfo_unexecuted_blocks=1 00:12:00.822 00:12:00.822 ' 00:12:00.822 13:06:07 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:00.822 13:06:07 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61984 00:12:00.822 13:06:07 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:00.822 13:06:07 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61984 00:12:00.822 13:06:07 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61984 ']' 00:12:00.822 13:06:07 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.822 13:06:07 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.822 13:06:07 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.822 13:06:07 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.822 13:06:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:00.822 [2024-12-06 13:06:07.305249] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
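cmdline.sh started this spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two JSON-RPC methods are callable on /var/tmp/spdk.sock; the env_dpdk_get_mem_stats failure exercised below is the negative half of that check. A minimal by-hand reproduction against the same socket, using the repo's rpc.py as this run does:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" spdk_get_version         # allowed: prints the version JSON shown below
  "$RPC" rpc_get_methods          # allowed: lists exactly the permitted methods
  "$RPC" env_dpdk_get_mem_stats   # not on the allow-list: rejected with Code=-32601 Method not found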
00:12:00.822 [2024-12-06 13:06:07.305841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61984 ] 00:12:01.080 [2024-12-06 13:06:07.464790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.080 [2024-12-06 13:06:07.503841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.337 13:06:07 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.337 13:06:07 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:12:01.337 13:06:07 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:01.596 { 00:12:01.596 "fields": { 00:12:01.596 "commit": "cf089b398", 00:12:01.596 "major": 25, 00:12:01.596 "minor": 1, 00:12:01.596 "patch": 0, 00:12:01.596 "suffix": "-pre" 00:12:01.596 }, 00:12:01.596 "version": "SPDK v25.01-pre git sha1 cf089b398" 00:12:01.596 } 00:12:01.596 13:06:08 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:12:01.596 13:06:08 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:01.596 13:06:08 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:01.596 13:06:08 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:01.596 13:06:08 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:01.596 13:06:08 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:01.596 13:06:08 app_cmdline -- app/cmdline.sh@26 -- # sort 00:12:01.596 13:06:08 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.596 13:06:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:01.596 13:06:08 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.596 13:06:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:01.596 13:06:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:01.596 13:06:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:01.596 13:06:08 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:12:01.596 13:06:08 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:01.596 13:06:08 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.596 13:06:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.596 13:06:08 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.596 13:06:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.596 13:06:08 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.596 13:06:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.596 13:06:08 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.596 13:06:08 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:01.596 13:06:08 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:01.854 2024/12/06 13:06:08 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:12:01.854 request: 00:12:01.854 { 00:12:01.854 "method": "env_dpdk_get_mem_stats", 00:12:01.854 "params": {} 00:12:01.854 } 00:12:01.854 Got JSON-RPC error response 00:12:01.854 GoRPCClient: error on JSON-RPC call 00:12:01.854 13:06:08 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:12:01.854 13:06:08 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:01.854 13:06:08 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:01.854 13:06:08 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:01.854 13:06:08 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61984 00:12:01.854 13:06:08 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61984 ']' 00:12:01.854 13:06:08 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61984 00:12:01.854 13:06:08 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:12:01.854 13:06:08 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:01.854 13:06:08 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61984 00:12:02.112 13:06:08 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.112 killing process with pid 61984 00:12:02.112 13:06:08 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.112 13:06:08 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61984' 00:12:02.112 13:06:08 app_cmdline -- common/autotest_common.sh@973 -- # kill 61984 00:12:02.112 13:06:08 app_cmdline -- common/autotest_common.sh@978 -- # wait 61984 00:12:02.112 00:12:02.112 real 0m1.566s 00:12:02.112 user 0m2.115s 00:12:02.112 sys 0m0.394s 00:12:02.112 13:06:08 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.112 13:06:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:02.112 ************************************ 00:12:02.112 END TEST app_cmdline 00:12:02.112 ************************************ 00:12:02.369 13:06:08 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:02.369 13:06:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:02.369 13:06:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.369 13:06:08 -- common/autotest_common.sh@10 -- # set +x 00:12:02.369 ************************************ 00:12:02.369 START TEST version 00:12:02.369 ************************************ 00:12:02.369 13:06:08 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:02.369 * Looking for test storage... 
00:12:02.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:02.369 13:06:08 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:02.369 13:06:08 version -- common/autotest_common.sh@1711 -- # lcov --version 00:12:02.369 13:06:08 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:02.369 13:06:08 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:02.369 13:06:08 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.369 13:06:08 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.369 13:06:08 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.369 13:06:08 version -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.369 13:06:08 version -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.369 13:06:08 version -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.369 13:06:08 version -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.369 13:06:08 version -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.369 13:06:08 version -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.369 13:06:08 version -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.369 13:06:08 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.369 13:06:08 version -- scripts/common.sh@344 -- # case "$op" in 00:12:02.369 13:06:08 version -- scripts/common.sh@345 -- # : 1 00:12:02.369 13:06:08 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.369 13:06:08 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:02.369 13:06:08 version -- scripts/common.sh@365 -- # decimal 1 00:12:02.369 13:06:08 version -- scripts/common.sh@353 -- # local d=1 00:12:02.369 13:06:08 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.369 13:06:08 version -- scripts/common.sh@355 -- # echo 1 00:12:02.369 13:06:08 version -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.369 13:06:08 version -- scripts/common.sh@366 -- # decimal 2 00:12:02.369 13:06:08 version -- scripts/common.sh@353 -- # local d=2 00:12:02.369 13:06:08 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.369 13:06:08 version -- scripts/common.sh@355 -- # echo 2 00:12:02.369 13:06:08 version -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.369 13:06:08 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.369 13:06:08 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.370 13:06:08 version -- scripts/common.sh@368 -- # return 0 00:12:02.370 13:06:08 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.370 13:06:08 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:02.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.370 --rc genhtml_branch_coverage=1 00:12:02.370 --rc genhtml_function_coverage=1 00:12:02.370 --rc genhtml_legend=1 00:12:02.370 --rc geninfo_all_blocks=1 00:12:02.370 --rc geninfo_unexecuted_blocks=1 00:12:02.370 00:12:02.370 ' 00:12:02.370 13:06:08 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:02.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.370 --rc genhtml_branch_coverage=1 00:12:02.370 --rc genhtml_function_coverage=1 00:12:02.370 --rc genhtml_legend=1 00:12:02.370 --rc geninfo_all_blocks=1 00:12:02.370 --rc geninfo_unexecuted_blocks=1 00:12:02.370 00:12:02.370 ' 00:12:02.370 13:06:08 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:02.370 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:02.370 --rc genhtml_branch_coverage=1 00:12:02.370 --rc genhtml_function_coverage=1 00:12:02.370 --rc genhtml_legend=1 00:12:02.370 --rc geninfo_all_blocks=1 00:12:02.370 --rc geninfo_unexecuted_blocks=1 00:12:02.370 00:12:02.370 ' 00:12:02.370 13:06:08 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:02.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.370 --rc genhtml_branch_coverage=1 00:12:02.370 --rc genhtml_function_coverage=1 00:12:02.370 --rc genhtml_legend=1 00:12:02.370 --rc geninfo_all_blocks=1 00:12:02.370 --rc geninfo_unexecuted_blocks=1 00:12:02.370 00:12:02.370 ' 00:12:02.370 13:06:08 version -- app/version.sh@17 -- # get_header_version major 00:12:02.370 13:06:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:02.370 13:06:08 version -- app/version.sh@14 -- # cut -f2 00:12:02.370 13:06:08 version -- app/version.sh@14 -- # tr -d '"' 00:12:02.370 13:06:08 version -- app/version.sh@17 -- # major=25 00:12:02.370 13:06:08 version -- app/version.sh@18 -- # get_header_version minor 00:12:02.370 13:06:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:02.370 13:06:08 version -- app/version.sh@14 -- # tr -d '"' 00:12:02.370 13:06:08 version -- app/version.sh@14 -- # cut -f2 00:12:02.370 13:06:08 version -- app/version.sh@18 -- # minor=1 00:12:02.370 13:06:08 version -- app/version.sh@19 -- # get_header_version patch 00:12:02.370 13:06:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:02.370 13:06:08 version -- app/version.sh@14 -- # cut -f2 00:12:02.370 13:06:08 version -- app/version.sh@14 -- # tr -d '"' 00:12:02.370 13:06:08 version -- app/version.sh@19 -- # patch=0 00:12:02.370 13:06:08 version -- app/version.sh@20 -- # get_header_version suffix 00:12:02.370 13:06:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:02.370 13:06:08 version -- app/version.sh@14 -- # cut -f2 00:12:02.370 13:06:08 version -- app/version.sh@14 -- # tr -d '"' 00:12:02.370 13:06:08 version -- app/version.sh@20 -- # suffix=-pre 00:12:02.370 13:06:08 version -- app/version.sh@22 -- # version=25.1 00:12:02.370 13:06:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:12:02.370 13:06:08 version -- app/version.sh@28 -- # version=25.1rc0 00:12:02.370 13:06:08 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:02.370 13:06:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:02.628 13:06:08 version -- app/version.sh@30 -- # py_version=25.1rc0 00:12:02.628 13:06:08 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:12:02.628 00:12:02.628 real 0m0.254s 00:12:02.628 user 0m0.180s 00:12:02.628 sys 0m0.101s 00:12:02.628 13:06:08 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.628 13:06:08 version -- common/autotest_common.sh@10 -- # set +x 00:12:02.628 ************************************ 00:12:02.628 END TEST version 00:12:02.628 ************************************ 00:12:02.628 13:06:08 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:12:02.628 13:06:08 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:12:02.628 13:06:08 -- spdk/autotest.sh@194 -- # uname -s 00:12:02.628 13:06:08 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:12:02.628 13:06:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:12:02.628 13:06:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:12:02.628 13:06:08 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:12:02.628 13:06:08 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:12:02.628 13:06:08 -- spdk/autotest.sh@260 -- # timing_exit lib 00:12:02.628 13:06:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:02.628 13:06:08 -- common/autotest_common.sh@10 -- # set +x 00:12:02.628 13:06:09 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:12:02.628 13:06:09 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:12:02.628 13:06:09 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:12:02.628 13:06:09 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:12:02.628 13:06:09 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:12:02.628 13:06:09 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:12:02.628 13:06:09 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:02.628 13:06:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.628 13:06:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.628 13:06:09 -- common/autotest_common.sh@10 -- # set +x 00:12:02.628 ************************************ 00:12:02.628 START TEST nvmf_tcp 00:12:02.628 ************************************ 00:12:02.628 13:06:09 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:02.628 * Looking for test storage... 00:12:02.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:02.628 13:06:09 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:02.628 13:06:09 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:12:02.628 13:06:09 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:02.886 13:06:09 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.886 13:06:09 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:12:02.886 13:06:09 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.886 13:06:09 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:02.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.886 --rc genhtml_branch_coverage=1 00:12:02.886 --rc genhtml_function_coverage=1 00:12:02.886 --rc genhtml_legend=1 00:12:02.886 --rc geninfo_all_blocks=1 00:12:02.886 --rc geninfo_unexecuted_blocks=1 00:12:02.886 00:12:02.886 ' 00:12:02.886 13:06:09 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:02.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.886 --rc genhtml_branch_coverage=1 00:12:02.886 --rc genhtml_function_coverage=1 00:12:02.886 --rc genhtml_legend=1 00:12:02.886 --rc geninfo_all_blocks=1 00:12:02.886 --rc geninfo_unexecuted_blocks=1 00:12:02.886 00:12:02.886 ' 00:12:02.886 13:06:09 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:02.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.886 --rc genhtml_branch_coverage=1 00:12:02.886 --rc genhtml_function_coverage=1 00:12:02.886 --rc genhtml_legend=1 00:12:02.886 --rc geninfo_all_blocks=1 00:12:02.886 --rc geninfo_unexecuted_blocks=1 00:12:02.886 00:12:02.886 ' 00:12:02.886 13:06:09 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:02.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.886 --rc genhtml_branch_coverage=1 00:12:02.886 --rc genhtml_function_coverage=1 00:12:02.886 --rc genhtml_legend=1 00:12:02.886 --rc geninfo_all_blocks=1 00:12:02.886 --rc geninfo_unexecuted_blocks=1 00:12:02.886 00:12:02.886 ' 00:12:02.886 13:06:09 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:12:02.886 13:06:09 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:12:02.886 13:06:09 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:12:02.886 13:06:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.886 13:06:09 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.886 13:06:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:02.886 ************************************ 00:12:02.886 START TEST nvmf_target_core 00:12:02.886 ************************************ 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:12:02.886 * Looking for test storage... 00:12:02.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:02.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.886 --rc genhtml_branch_coverage=1 00:12:02.886 --rc genhtml_function_coverage=1 00:12:02.886 --rc genhtml_legend=1 00:12:02.886 --rc geninfo_all_blocks=1 00:12:02.886 --rc geninfo_unexecuted_blocks=1 00:12:02.886 00:12:02.886 ' 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:02.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.886 --rc genhtml_branch_coverage=1 00:12:02.886 --rc genhtml_function_coverage=1 00:12:02.886 --rc genhtml_legend=1 00:12:02.886 --rc geninfo_all_blocks=1 00:12:02.886 --rc geninfo_unexecuted_blocks=1 00:12:02.886 00:12:02.886 ' 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:02.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.886 --rc genhtml_branch_coverage=1 00:12:02.886 --rc genhtml_function_coverage=1 00:12:02.886 --rc genhtml_legend=1 00:12:02.886 --rc geninfo_all_blocks=1 00:12:02.886 --rc geninfo_unexecuted_blocks=1 00:12:02.886 00:12:02.886 ' 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:02.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.886 --rc genhtml_branch_coverage=1 00:12:02.886 --rc genhtml_function_coverage=1 00:12:02.886 --rc genhtml_legend=1 00:12:02.886 --rc geninfo_all_blocks=1 00:12:02.886 --rc geninfo_unexecuted_blocks=1 00:12:02.886 00:12:02.886 ' 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:12:02.886 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:02.887 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:02.887 ************************************ 00:12:02.887 START TEST nvmf_abort 00:12:02.887 ************************************ 00:12:02.887 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:03.145 * Looking for test storage... 
00:12:03.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.145 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:03.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.146 --rc genhtml_branch_coverage=1 00:12:03.146 --rc genhtml_function_coverage=1 00:12:03.146 --rc genhtml_legend=1 00:12:03.146 --rc geninfo_all_blocks=1 00:12:03.146 --rc geninfo_unexecuted_blocks=1 00:12:03.146 00:12:03.146 ' 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:03.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.146 --rc genhtml_branch_coverage=1 00:12:03.146 --rc genhtml_function_coverage=1 00:12:03.146 --rc genhtml_legend=1 00:12:03.146 --rc geninfo_all_blocks=1 00:12:03.146 --rc geninfo_unexecuted_blocks=1 00:12:03.146 00:12:03.146 ' 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:03.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.146 --rc genhtml_branch_coverage=1 00:12:03.146 --rc genhtml_function_coverage=1 00:12:03.146 --rc genhtml_legend=1 00:12:03.146 --rc geninfo_all_blocks=1 00:12:03.146 --rc geninfo_unexecuted_blocks=1 00:12:03.146 00:12:03.146 ' 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:03.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.146 --rc genhtml_branch_coverage=1 00:12:03.146 --rc genhtml_function_coverage=1 00:12:03.146 --rc genhtml_legend=1 00:12:03.146 --rc geninfo_all_blocks=1 00:12:03.146 --rc geninfo_unexecuted_blocks=1 00:12:03.146 00:12:03.146 ' 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
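Sourcing nvmf/common.sh first pins the endpoint constants reused by every test in this suite (ports 4420-4422 and, with NET_TYPE=virt, the 10.0.0.x addresses assigned further down); the host identity comes from nvme-cli:

  # gen-hostnqn wraps a freshly generated UUID in an NVMe-oF host NQN; this run captured:
  nvme gen-hostnqn
  # nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2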
00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.146 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:03.146 
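nvmftestinit now builds the virtual test network (the "Cannot find device" lines that follow are only the pre-cleanup of a topology that does not exist yet). Condensed from the ip invocations in the output below, the resulting setup is roughly:

  ip netns add nvmf_tgt_ns_spdk                     # target side lives in its own namespace
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk   # move the target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if          # initiator addresses
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
             nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$dev" up                         # bring every host-side leg up
  done
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br             # bridge the four peer legs together
  done

plus an iptables ACCEPT for TCP port 4420 on nvmf_init_if, as the rule at the end of this output shows, so the initiator can reach the listener.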
13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:03.146 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:12:03.147 Cannot find device "nvmf_init_br" 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:03.147 Cannot find device "nvmf_init_br2" 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:03.147 Cannot find device "nvmf_tgt_br" 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:03.147 Cannot find device "nvmf_tgt_br2" 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:03.147 Cannot find device "nvmf_init_br" 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:12:03.147 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:03.404 Cannot find device "nvmf_init_br2" 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:03.404 Cannot find device "nvmf_tgt_br" 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:03.404 Cannot find device "nvmf_tgt_br2" 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:03.404 Cannot find device "nvmf_br" 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:03.404 Cannot find device "nvmf_init_if" 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:03.404 Cannot find device "nvmf_init_if2" 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:03.404 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:03.404 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:03.404 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:03.662 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:03.662 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:03.662 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:03.662 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:03.662 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:03.662 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:03.662 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:03.662 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:03.662 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:03.662 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:03.662 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:03.662 13:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:03.662 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:03.662 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:03.662 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:03.662 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:03.662 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:03.662 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:03.662 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:03.919 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:03.919 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.122 ms 00:12:03.919 00:12:03.919 --- 10.0.0.3 ping statistics --- 00:12:03.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.919 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:03.919 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:03.919 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.114 ms 00:12:03.919 00:12:03.919 --- 10.0.0.4 ping statistics --- 00:12:03.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.919 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:03.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:03.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:12:03.919 00:12:03.919 --- 10.0.0.1 ping statistics --- 00:12:03.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.919 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:03.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:03.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:12:03.919 00:12:03.919 --- 10.0.0.2 ping statistics --- 00:12:03.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.919 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=62408 00:12:03.919 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:03.920 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 62408 00:12:03.920 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 62408 ']' 00:12:03.920 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.920 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.920 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.920 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.920 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:03.920 [2024-12-06 13:06:10.329974] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
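The nvmftestinit traces above (nvmf/common.sh@145-@225) first tear down any leftover interfaces (the "Cannot find device" lines are the expected no-op case), then build the virtual test network: veth pairs for two initiators and two targets, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything enslaved to the nvmf_br bridge, iptables ACCEPT rules tagged with an SPDK_NVMF comment, and ping checks in both directions. A condensed sketch of the same topology, cut down to a single initiator/target pair (names and addresses as in the log; the second pair and error handling omitted):

  # One initiator veth pair on the host, one target pair in the namespace,
  # bridged together so 10.0.0.1 (host) can reach 10.0.0.3 (target).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # Tag the firewall rule so teardown can strip it with a simple grep.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                  # host -> namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # namespace -> host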
00:12:03.920 [2024-12-06 13:06:10.330084] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.177 [2024-12-06 13:06:10.481822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:04.177 [2024-12-06 13:06:10.523458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.177 [2024-12-06 13:06:10.523518] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.177 [2024-12-06 13:06:10.523532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.177 [2024-12-06 13:06:10.523542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.177 [2024-12-06 13:06:10.523551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.177 [2024-12-06 13:06:10.524484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.177 [2024-12-06 13:06:10.526985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.177 [2024-12-06 13:06:10.527017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.177 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.177 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:12:04.177 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:04.177 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:04.177 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:04.177 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.177 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:04.177 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.177 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:04.178 [2024-12-06 13:06:10.691659] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.178 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.178 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:04.178 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.178 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:04.435 Malloc0 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:04.436 
Delay0 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:04.436 [2024-12-06 13:06:10.755939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.436 13:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:04.436 [2024-12-06 13:06:10.941863] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:07.048 Initializing NVMe Controllers 00:12:07.048 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:12:07.048 controller IO queue size 128 less than required 00:12:07.048 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:07.048 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:07.048 Initialization complete. Launching workers. 
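Stripped of its xtrace prefixes, the abort test above is a short RPC sequence plus one example binary: create the TCP transport, build a 64 MiB malloc bdev, wrap it in a delay bdev so I/O stays in flight long enough to be aborted, expose it as cnode0 on 10.0.0.3:4420, then run the abort workload against it. A sketch of the equivalent manual steps, assuming the target is already running and rpc.py can reach it on the default /var/tmp/spdk.sock:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0      # 64 MiB bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s injected latency per I/O
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # Queue depth 128 against a delay bdev guarantees queued commands to abort.
  /home/vagrant/spdk_repo/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128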
00:12:07.048 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 23841 00:12:07.048 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 23902, failed to submit 62 00:12:07.048 success 23845, unsuccessful 57, failed 0 00:12:07.048 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:07.048 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.048 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:07.048 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.048 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:07.048 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:07.048 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:07.048 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:07.048 rmmod nvme_tcp 00:12:07.048 rmmod nvme_fabrics 00:12:07.048 rmmod nvme_keyring 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 62408 ']' 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 62408 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 62408 ']' 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 62408 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62408 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:07.048 killing process with pid 62408 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62408' 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 62408 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 62408 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
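Teardown (nvmftestfini, traced above) runs in a fixed order: clear the trap, sync, unload the NVMe-oF kernel modules under set +e, then kill the target only after checking that the recorded pid still names an SPDK reactor rather than a reused pid. A sketch of the core of that sequence (pid value taken from this run; the real helpers retry up to twenty times and log more carefully):

  pid=62408                           # nvmfpid recorded when the target started
  sync
  modprobe -v -r nvme-tcp             # also drops the nvme_fabrics/nvme_keyring deps
  modprobe -v -r nvme-fabrics
  # Refuse to kill a reused pid: the command name must still be a reactor.
  if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
    kill "$pid" && wait "$pid"
  fi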
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:07.048 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:07.049 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:07.049 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.049 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.049 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.049 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:12:07.049 00:12:07.049 real 0m4.102s 00:12:07.049 user 0m10.116s 00:12:07.049 sys 0m1.091s 00:12:07.049 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.049 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:07.049 ************************************ 00:12:07.049 END TEST nvmf_abort 00:12:07.049 ************************************ 00:12:07.049 13:06:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:07.049 13:06:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:07.049 13:06:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.049 13:06:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:07.049 ************************************ 00:12:07.049 START TEST nvmf_ns_hotplug_stress 00:12:07.049 ************************************ 00:12:07.049 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:07.309 * Looking for test storage... 00:12:07.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:07.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.309 --rc genhtml_branch_coverage=1 00:12:07.309 --rc genhtml_function_coverage=1 00:12:07.309 --rc genhtml_legend=1 00:12:07.309 --rc geninfo_all_blocks=1 00:12:07.309 --rc geninfo_unexecuted_blocks=1 00:12:07.309 00:12:07.309 ' 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:07.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.309 --rc genhtml_branch_coverage=1 00:12:07.309 --rc genhtml_function_coverage=1 00:12:07.309 --rc genhtml_legend=1 00:12:07.309 --rc geninfo_all_blocks=1 00:12:07.309 --rc geninfo_unexecuted_blocks=1 00:12:07.309 00:12:07.309 ' 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:07.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.309 --rc genhtml_branch_coverage=1 00:12:07.309 --rc genhtml_function_coverage=1 00:12:07.309 --rc genhtml_legend=1 00:12:07.309 --rc geninfo_all_blocks=1 00:12:07.309 --rc geninfo_unexecuted_blocks=1 00:12:07.309 00:12:07.309 ' 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:07.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.309 --rc genhtml_branch_coverage=1 00:12:07.309 --rc genhtml_function_coverage=1 00:12:07.309 --rc genhtml_legend=1 00:12:07.309 --rc geninfo_all_blocks=1 00:12:07.309 --rc geninfo_unexecuted_blocks=1 00:12:07.309 00:12:07.309 ' 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:07.309 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:07.310 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:07.310 13:06:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:07.310 Cannot find device "nvmf_init_br" 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:07.310 Cannot find device "nvmf_init_br2" 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:07.310 Cannot find device "nvmf_tgt_br" 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:07.310 Cannot find device "nvmf_tgt_br2" 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:07.310 Cannot find device "nvmf_init_br" 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:12:07.310 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:07.310 Cannot find device "nvmf_init_br2" 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:07.569 Cannot find device "nvmf_tgt_br" 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:07.569 Cannot find device "nvmf_tgt_br2" 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:07.569 Cannot find device "nvmf_br" 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:07.569 Cannot find device "nvmf_init_if" 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:07.569 Cannot find device "nvmf_init_if2" 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:07.569 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:07.569 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:07.569 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:07.570 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:07.570 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:07.570 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:07.570 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:07.570 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:07.570 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:07.570 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:07.570 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:12:07.570 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:07.570 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:07.570 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:07.570 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:07.570 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:07.570 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:07.570 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:07.570 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:07.570 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:07.570 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:07.829 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:07.829 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.145 ms 00:12:07.829 00:12:07.829 --- 10.0.0.3 ping statistics --- 00:12:07.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.829 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:07.829 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:12:07.829 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:12:07.829 00:12:07.829 --- 10.0.0.4 ping statistics --- 00:12:07.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.829 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:07.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:12:07.829 00:12:07.829 --- 10.0.0.1 ping statistics --- 00:12:07.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.829 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:07.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:12:07.829 00:12:07.829 --- 10.0.0.2 ping statistics --- 00:12:07.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.829 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:07.829 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:07.830 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:07.830 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:07.830 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.830 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=62690 00:12:07.830 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:07.830 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 62690 00:12:07.830 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 62690 ']' 00:12:07.830 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.830 13:06:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.830 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.830 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.830 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.830 [2024-12-06 13:06:14.226201] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:12:07.830 [2024-12-06 13:06:14.226294] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.089 [2024-12-06 13:06:14.386126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:08.089 [2024-12-06 13:06:14.434278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.089 [2024-12-06 13:06:14.434744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.089 [2024-12-06 13:06:14.434883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.089 [2024-12-06 13:06:14.435034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.089 [2024-12-06 13:06:14.435141] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
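nvmfappstart, traced above, launches the target inside the test namespace with shared-memory id 0 (-i 0), every tracepoint group enabled (-e 0xFFFF), and core mask 0xE (cores 1-3, matching the three reactor notices that follow), then waitforlisten polls until the RPC socket answers. A sketch of the launch, with the polling loop shown as a plausible simplification of the real helper:

  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!                          # 62690 in this run
  # Poll until the app accepts RPCs on /var/tmp/spdk.sock (simplified).
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods \
      >/dev/null 2>&1; do
    sleep 0.1
  done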
00:12:08.089 [2024-12-06 13:06:14.436258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.089 [2024-12-06 13:06:14.437035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.089 [2024-12-06 13:06:14.437045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.089 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.089 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:12:08.089 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:08.089 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:08.089 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.089 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.089 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:08.089 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:08.347 [2024-12-06 13:06:14.844536] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.605 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:08.862 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:09.120 [2024-12-06 13:06:15.457204] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:09.120 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:09.377 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:09.635 Malloc0 00:12:09.635 13:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:09.893 Delay0 00:12:09.893 13:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:10.152 13:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:10.410 NULL1 00:12:10.669 13:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:10.952 13:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # 
PERF_PID=62814 00:12:10.952 13:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:10.952 13:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:10.952 13:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.327 Read completed with error (sct=0, sc=11) 00:12:12.327 13:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.327 13:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:12.327 13:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:12.893 true 00:12:12.893 13:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:12.893 13:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.459 13:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:13.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.717 13:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:13.717 13:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:14.283 true 00:12:14.284 13:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:14.284 13:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.850 13:06:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:14.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.108 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.108 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.108 13:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:15.108 13:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:15.366 true 00:12:15.366 13:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:15.366 13:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.377 13:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:16.377 13:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:16.377 13:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:16.635 true 00:12:16.635 13:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:16.635 13:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:18.534 13:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:18.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:18.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:18.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:18.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:18.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:18.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:18.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:18.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:18.534 13:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:18.534 13:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1005 00:12:18.791 true 00:12:19.047 13:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:19.047 13:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.612 13:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:19.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.870 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.870 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.870 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.870 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.870 13:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:19.870 13:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:20.127 true 00:12:20.127 13:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:20.127 13:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.063 Message suppressed 999 times: 13:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:21.063 Read completed with error (sct=0, sc=11) 00:12:21.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.327 13:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:21.327 13:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:21.585 true 00:12:21.585 13:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:21.585 13:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.843 13:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:22.101 13:06:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:22.101 13:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:22.359 true 00:12:22.359 13:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:22.359 13:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.313 13:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:23.571 13:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:23.571 13:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:23.831 true 00:12:23.831 13:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:23.831 13:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.091 13:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.349 13:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:24.349 13:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:24.607 true 00:12:24.607 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:24.607 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.865 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:25.123 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:25.123 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:25.382 true 00:12:25.382 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:25.382 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.641 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:25.900 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1012 00:12:25.900 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:26.159 true 00:12:26.418 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:26.418 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:27.376 13:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:27.376 13:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:27.376 13:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:27.634 true 00:12:27.891 13:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:27.891 13:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.148 13:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:28.406 13:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:28.406 13:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:28.664 true 00:12:28.664 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:28.664 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.921 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:29.179 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:29.179 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:29.436 true 00:12:29.693 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:29.693 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.950 13:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:30.208 13:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1016 00:12:30.208 13:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:30.465 true 00:12:30.465 13:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:30.465 13:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.400 13:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:31.400 13:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:31.400 13:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:31.967 true 00:12:31.967 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:31.967 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.225 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:32.483 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:32.483 13:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:32.742 true 00:12:32.742 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:32.742 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:33.000 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.259 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:33.259 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:33.517 true 00:12:33.517 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:33.517 13:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.084 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:34.342 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:34.342 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:34.599 true 00:12:34.599 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:34.599 13:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.858 13:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.115 13:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:35.115 13:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:35.375 true 00:12:35.375 13:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:35.375 13:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.940 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.940 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:35.940 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:36.198 true 00:12:36.198 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:36.198 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.189 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.755 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:37.755 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:38.014 true 00:12:38.014 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:38.014 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:39.386 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:39.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:39.386 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:39.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:39.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:39.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:39.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:39.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:39.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:39.644 13:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:39.644 13:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:39.900 true 00:12:39.900 13:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:39.900 13:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.831 13:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:40.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:40.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:40.831 13:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:40.831 13:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:41.393 true 00:12:41.393 13:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814 00:12:41.393 13:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.958 Initializing NVMe Controllers 00:12:41.958 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:12:41.958 Controller IO queue size 128, less than required. 00:12:41.958 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:41.958 Controller IO queue size 128, less than required. 00:12:41.958 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:41.958 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:41.958 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:41.958 Initialization complete. Launching workers. 
00:12:41.958 ========================================================
00:12:41.958                                                                            Latency(us)
00:12:41.958 Device Information                                                       :     IOPS      MiB/s    Average        min        max
00:12:41.958 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1974.10       0.96   34901.06    2879.60 1193176.10
00:12:41.958 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:  8970.63       4.38   14270.21    3217.38  593122.40
00:12:41.958 ========================================================
00:12:41.958 Total                                                                    : 10944.73       5.34   17991.39    2879.60 1193176.10
00:12:41.958
00:12:41.958 13:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:42.523 13:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:12:42.523 13:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:12:42.780 true
00:12:42.780 13:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62814
00:12:42.780 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (62814) - No such process
00:12:42.780 13:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 62814
00:12:42.780 13:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:43.037 13:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:43.602 13:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:12:43.602 13:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:12:43.602 13:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:12:43.602 13:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:43.602 13:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:12:43.602 null0
00:12:43.859 13:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:43.859 13:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:43.859 13:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:12:44.116 null1
00:12:44.116 13:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:44.116 13:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:44.116 13:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:12:44.680 null2
00:12:44.680 13:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:44.680 13:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:44.680 13:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:12:44.680 null3 00:12:44.937 13:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:44.937 13:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:44.937 13:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:12:45.195 null4 00:12:45.195 13:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:45.195 13:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:45.195 13:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:12:45.453 null5 00:12:45.453 13:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:45.453 13:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:45.453 13:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:12:45.709 null6 00:12:45.709 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:45.709 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:45.709 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:12:45.968 null7 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
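[Editor's note, not captured log output] The Total row of the Latency(us) summary a few records above is the IOPS-weighted mean of the two namespace rows; a quick sanity check:
awk 'BEGIN { printf "%.2f\n", (1974.10 * 34901.06 + 8970.63 * 14270.21) / (1974.10 + 8970.63) }'
# prints 17991.39, matching the Total Average column; the IOPS column likewise sums
# (1974.10 + 8970.63 = 10944.73), and MiB/s is IOPS * 512 B / 2^20 for the -o 512 workload
# (e.g. 1974.10 * 512 / 1048576 = 0.96).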
00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
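[Editor's note, not captured log output] Each 'bdev_null_create nullN 100 4096' call above creates a 100 MiB null bdev with a 4096-byte block size; rpc.py turns that into a plain JSON-RPC request on the default UNIX socket (100 MiB / 4096 B = 25600 blocks). Roughly, assuming a netcat build with UNIX-socket support (-U):
echo '{"jsonrpc": "2.0", "id": 1, "method": "bdev_null_create", "params": {"name": "null0", "num_blocks": 25600, "block_size": 4096}}' \
        | nc -U /var/tmp/spdk.sock   # same request rpc.py sends under the hood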
00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
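[Editor's note, not captured log output] The interleaved add_remove/@16/@17 traces from here on come from eight concurrent workers. Reconstructed from the trace, each worker has roughly the shape below (a sketch, not the verbatim ns_hotplug_stress.sh source):
add_remove() {   # attach $bdev as namespace $nsid, then detach it, ten times
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
                /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
                /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
}
pids=()
for ((i = 0; i < 8; i++)); do
        add_remove $((i + 1)) "null$i" &   # add_remove 1 null0 ... add_remove 8 null7, as traced above
        pids+=($!)
done
wait "${pids[@]}"   # matches the earlier '@66 -- # wait 63814 ... 63827' trace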
00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:45.968 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 63814 63815 63817 63819 63820 63823 63825 63827 00:12:46.225 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.483 13:06:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:46.483 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:46.483 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:46.483 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:46.483 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:46.483 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.483 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:46.483 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.483 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.483 13:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:46.741 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.741 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.741 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:46.741 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.741 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.741 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:46.741 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.741 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.741 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:46.741 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.741 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.741 13:06:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:46.741 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.741 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.741 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:46.998 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.998 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.998 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:46.998 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.998 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.998 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:46.998 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.998 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:46.998 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:46.998 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:46.998 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:47.256 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:47.256 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:47.256 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.256 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.256 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.256 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:47.256 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.256 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.256 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:47.514 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.514 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.514 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:47.514 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.514 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.514 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:47.514 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.514 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.514 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:47.514 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.514 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.514 13:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:47.771 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.771 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.771 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:47.771 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:47.771 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.771 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.771 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:12:47.771 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:47.771 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:47.771 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:47.771 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:48.030 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:48.030 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.030 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.030 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.030 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:48.288 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.288 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.288 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:48.288 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:48.288 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.288 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.288 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:48.288 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.288 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.288 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:48.288 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.288 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.288 13:06:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:48.545 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:48.545 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.545 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.545 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:48.545 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.545 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.545 13:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:48.545 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.545 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.545 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:48.803 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:48.803 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:48.803 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:48.803 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.803 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.803 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:48.803 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:49.060 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:49.060 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:49.060 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.060 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.060 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:49.060 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.060 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.060 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:49.060 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:49.060 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.397 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.397 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.397 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:49.397 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:49.397 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.397 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.397 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:49.397 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.397 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.397 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:49.397 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.397 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.397 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:49.654 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.654 13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.654 
13:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:49.654 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.654 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.654 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.654 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:49.655 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:49.655 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.655 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.655 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:49.912 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:49.912 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:49.912 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:49.912 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.912 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.912 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:50.170 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:50.170 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:50.170 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.170 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.170 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:50.170 13:06:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.170 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.170 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.170 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:50.428 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:50.428 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.428 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.428 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:50.428 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:50.428 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.428 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.428 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:50.686 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.686 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.686 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:50.686 13:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:50.686 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.686 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.686 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:50.686 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:50.686 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.686 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
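For reference, the two RPCs being hammered in this loop take different argument shapes: nvmf_subsystem_add_ns names both a namespace ID and the backing bdev, while nvmf_subsystem_remove_ns takes only the namespace ID. Exactly as invoked in the records above:

    # attach bdev null2 to subsystem cnode1 as namespace 3 ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
    # ... and detach namespace 3 again
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3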
00:12:50.686 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:50.686 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.686 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.686 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:50.945 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.945 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.945 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:50.945 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.945 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.945 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:50.945 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.945 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.945 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:50.945 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:51.203 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:51.203 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:51.203 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:51.203 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:51.203 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.462 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:51.462 13:06:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:51.462 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.462 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:51.462 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:51.462 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.462 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:51.462 13:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.720 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:51.720 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.720 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:51.720 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:51.720 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.720 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:51.720 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:51.720 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.720 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:51.720 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:51.720 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:51.720 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.720 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:51.977 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:51.977 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:51.977 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.977 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:51.977 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:51.977 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:51.977 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.977 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:51.977 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:51.977 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:51.977 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:52.235 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:52.235 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:52.235 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.235 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.235 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.235 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:52.235 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:52.492 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.492 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.492 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:52.492 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.492 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 
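The null0-null7 bdevs cycled through here must already exist when the loop starts; their creation happened before this excerpt. A hypothetical setup using SPDK's bdev_null_create RPC, with illustrative size (MB) and block size rather than values taken from this run:

    for k in {0..7}; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create "null$k" 100 4096
    done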
00:12:52.492 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.493 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.493 13:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:52.750 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:52.750 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.750 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.750 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:52.750 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.750 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.750 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:52.750 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:52.750 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.750 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:53.006 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:53.006 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:53.006 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.006 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.006 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:53.006 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.006 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.006 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:53.006 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.006 13:06:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.006 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:53.263 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:53.263 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.263 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:53.263 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.263 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.263 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:53.263 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:53.520 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.520 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.520 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:53.520 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:53.520 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:53.520 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.520 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.520 13:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:53.777 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:53.778 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.778 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.778 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.778 13:07:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.778 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:53.778 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.778 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.778 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:53.778 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.778 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.778 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:53.778 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:54.034 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:54.034 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.034 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.034 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:54.034 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.034 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.034 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.291 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:54.291 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:54.291 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.291 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.291 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.291 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.291 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:54.591 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.591 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.591 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.591 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.591 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:54.591 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.591 13:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.591 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.591 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.876 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:55.134 rmmod nvme_tcp 00:12:55.134 rmmod nvme_fabrics 00:12:55.134 rmmod nvme_keyring 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 62690 ']' 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 62690 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 62690 ']' 00:12:55.134 13:07:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 62690 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62690 00:12:55.134 killing process with pid 62690 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62690' 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 62690 00:12:55.134 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 62690 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 
00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:55.390 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:55.648 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:55.648 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.648 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.648 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.648 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:12:55.648 00:12:55.648 real 0m48.405s 00:12:55.648 user 4m5.255s 00:12:55.648 sys 0m14.010s 00:12:55.648 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.648 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.648 ************************************ 00:12:55.648 END TEST nvmf_ns_hotplug_stress 00:12:55.648 ************************************ 00:12:55.648 13:07:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:55.648 13:07:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:55.648 13:07:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.648 13:07:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:55.648 ************************************ 00:12:55.648 START TEST nvmf_delete_subsystem 00:12:55.648 ************************************ 00:12:55.648 13:07:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:55.649 * Looking for test storage... 
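The records just above trace nvmftestfini tearing everything down between tests: unloading the nvme-tcp/nvme-fabrics/nvme-keyring modules, killing the target process (pid 62690), restoring iptables, and deleting the veth/bridge/netns topology before nvmf_delete_subsystem starts. A loose reconstruction of the killprocess step from the @954-@978 trace; the sudo branch body is an assumption, the rest mirrors the traced calls:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                               # @954: require a pid
        kill -0 "$pid" || return 0                              # @958: already gone, nothing to do
        local process_name=
        if [ "$(uname)" = Linux ]; then                         # @959
            process_name=$(ps --no-headers -o comm= "$pid")     # reactor_1 in this run
        fi
        if [ "$process_name" = sudo ]; then                     # @964: assumed: retarget the child
            pid=$(pgrep -P "$pid")                              # hypothetical, not in the trace
        fi
        echo "killing process with pid $pid"                    # @972
        kill "$pid" && wait "$pid"                              # @973 / @978
    }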
00:12:55.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:55.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.649 --rc genhtml_branch_coverage=1 00:12:55.649 --rc genhtml_function_coverage=1 00:12:55.649 --rc genhtml_legend=1 00:12:55.649 --rc geninfo_all_blocks=1 00:12:55.649 --rc geninfo_unexecuted_blocks=1 00:12:55.649 00:12:55.649 ' 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:55.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.649 --rc genhtml_branch_coverage=1 00:12:55.649 --rc genhtml_function_coverage=1 00:12:55.649 --rc genhtml_legend=1 00:12:55.649 --rc geninfo_all_blocks=1 00:12:55.649 --rc geninfo_unexecuted_blocks=1 00:12:55.649 00:12:55.649 ' 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:55.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.649 --rc genhtml_branch_coverage=1 00:12:55.649 --rc genhtml_function_coverage=1 00:12:55.649 --rc genhtml_legend=1 00:12:55.649 --rc geninfo_all_blocks=1 00:12:55.649 --rc geninfo_unexecuted_blocks=1 00:12:55.649 00:12:55.649 ' 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:55.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.649 --rc genhtml_branch_coverage=1 00:12:55.649 --rc genhtml_function_coverage=1 00:12:55.649 --rc genhtml_legend=1 00:12:55.649 --rc geninfo_all_blocks=1 00:12:55.649 --rc geninfo_unexecuted_blocks=1 00:12:55.649 00:12:55.649 ' 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.649 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...the same three toolchain directories repeated several more times, elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...previous PATH value, duplicate toolchain entries elided...]
00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...previous PATH value, duplicate toolchain entries elided...]
00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo [...same PATH value as above, elided...]
00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:55.907 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- #
nvmftestinit 00:12:55.907 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
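At this point nvmf_veth_init has only declared the topology: the initiator-side interfaces nvmf_init_if/nvmf_init_if2 (10.0.0.1, 10.0.0.2) stay on the host, the target-side interfaces nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3, 10.0.0.4) will live inside the nvmf_tgt_ns_spdk namespace, and the bridge nvmf_br ties all the host-side veth peers together. A condensed sketch of the build-up that follows, for a single initiator/target pair (plain iproute2 commands distilled from the trace below, not quoted verbatim from test/nvmf/common.sh):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair, both ends on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair; far end moves into the netns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                              # bridge the host-side peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

With that in place the host can reach 10.0.0.3/10.0.0.4 and the namespace can reach back, which is exactly what the ping checks in the trace verify before the target is started.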
00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:55.908 Cannot find device "nvmf_init_br" 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:55.908 Cannot find device "nvmf_init_br2" 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:55.908 Cannot find device "nvmf_tgt_br" 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:55.908 Cannot find device "nvmf_tgt_br2" 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:55.908 Cannot find device "nvmf_init_br" 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:55.908 Cannot find device "nvmf_init_br2" 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:55.908 Cannot find device "nvmf_tgt_br" 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:55.908 Cannot find device "nvmf_tgt_br2" 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:55.908 Cannot find device "nvmf_br" 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:55.908 Cannot find device "nvmf_init_if" 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:55.908 Cannot find device "nvmf_init_if2" 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:55.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:55.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:55.908 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br 
up 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:56.167 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:56.167 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:12:56.167 00:12:56.167 --- 10.0.0.3 ping statistics --- 00:12:56.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.167 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:56.167 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:56.167 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:12:56.167 00:12:56.167 --- 10.0.0.4 ping statistics --- 00:12:56.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.167 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:56.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:56.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:12:56.167 00:12:56.167 --- 10.0.0.1 ping statistics --- 00:12:56.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.167 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:56.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:56.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:12:56.167 00:12:56.167 --- 10.0.0.2 ping statistics --- 00:12:56.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.167 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:56.167 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=65255 00:12:56.168 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:56.168 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 65255 00:12:56.168 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 65255 ']' 00:12:56.168 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.168 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.168 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.168 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.168 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:56.168 [2024-12-06 13:07:02.631465] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:12:56.168 [2024-12-06 13:07:02.631558] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.426 [2024-12-06 13:07:02.820678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:56.426 [2024-12-06 13:07:02.866790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.426 [2024-12-06 13:07:02.866899] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.426 [2024-12-06 13:07:02.866925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.426 [2024-12-06 13:07:02.866968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.426 [2024-12-06 13:07:02.866993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.426 [2024-12-06 13:07:02.868379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.426 [2024-12-06 13:07:02.868407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.426 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.426 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:12:56.426 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:56.426 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:56.426 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:56.684 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.684 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:56.684 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.684 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:56.684 [2024-12-06 13:07:02.994011] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.684 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.684 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:56.684 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.684 13:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:56.684 [2024-12-06 13:07:03.010138] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:56.684 NULL1 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:56.684 Delay0 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=65288 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:56.684 13:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:56.941 [2024-12-06 13:07:03.215363] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
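The whole target configuration above went through JSON-RPC against the nvmf_tgt just launched inside the namespace: create a TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1 (queue depth capped at 128 via -m 10 scaling aside, serial SPDK00000000000001), listen on 10.0.0.3:4420, back it with a null bdev wrapped in a delay bdev, then aim spdk_nvme_perf at it. The rpc_cmd helper wraps SPDK's scripts/rpc.py, so the same sequence can be reproduced stand-alone roughly as follows (assuming the default /var/tmp/spdk.sock RPC socket):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512        # size/block-size arguments as traced above
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev's latency arguments (microseconds, so roughly one second per I/O) are what guarantee spdk_nvme_perf still has requests in flight when the subsystem is deleted two seconds later.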
00:12:58.843 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.843 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.843 13:07:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[00:12:58.843-00:12:59.778: several hundred per-request completion lines elided -- interleaved "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6", repeated for every request still queued on Delay0 while the subsystem is torn down under load]
00:12:59.777 [2024-12-06 13:07:06.235371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a7aa0 is same with the state(6) to be set
00:12:59.777 [2024-12-06 13:07:06.250088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b5ea0 is same with the state(6) to be set
00:12:59.777 [2024-12-06 13:07:06.250364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b2a50 is same with the state(6) to be set
00:12:59.777 [2024-12-06 13:07:06.253997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffa8400d050 is same with the state(6) to be set
00:12:59.778 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.778 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:59.778 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65288 00:12:59.778 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:12:59.778 [2024-12-06 13:07:06.255987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffa8400d6b0 is same with the state(6) to be set
00:12:59.778 Initializing NVMe Controllers 00:12:59.778 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:12:59.778 Controller IO queue size 128, less than required.
00:12:59.778 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:59.778 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:59.778 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:59.778 Initialization complete. Launching workers. 00:12:59.778 ======================================================== 00:12:59.778 Latency(us) 00:12:59.778 Device Information : IOPS MiB/s Average min max 00:12:59.778 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 185.45 0.09 908685.97 815.09 1013584.25 00:12:59.778 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 185.94 0.09 907158.51 1013.88 1012869.21 00:12:59.778 ======================================================== 00:12:59.778 Total : 371.39 0.18 907921.22 815.09 1013584.25 00:12:59.778 00:12:59.778 [2024-12-06 13:07:06.256491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a7aa0 (9): Bad file descriptor 00:12:59.778 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65288 00:13:00.343 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (65288) - No such process 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 65288 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 65288 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 65288 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:00.343 [2024-12-06 13:07:06.775207] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=65339 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65339 00:13:00.343 13:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:00.600 [2024-12-06 13:07:06.959462] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
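The repeated "(( delay++ > 20 ))" / "kill -0 65339" / "sleep 0.5" triples that follow are a bounded poll: the script waits for the second spdk_nvme_perf run (pid 65339, a 3-second job against the re-created subsystem) to exit on its own, giving up after roughly ten seconds. Reconstructed from the delete_subsystem.sh line numbers visible in the trace (@56-@60), the loop is approximately the sketch below; the failure branch is not shown in this window, so its body here is a placeholder:

    delay=0
    while kill -0 "$perf_pid"; do      # signal 0: existence check only, nothing is delivered
        sleep 0.5
        if (( delay++ > 20 )); then    # ~10 s budget
            return 1                   # placeholder: actual failure handling not visible in the trace
        fi
    done

Once kill -0 starts failing ("No such process"), wait collects perf's exit status and the test proceeds to teardown, as the lines after the latency summary show.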
00:13:00.858 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:00.858 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65339 00:13:00.858 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:01.423 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:01.423 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65339 00:13:01.423 13:07:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:01.989 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:01.989 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65339 00:13:01.989 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:02.555 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:02.555 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65339 00:13:02.555 13:07:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:02.812 13:07:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:02.812 13:07:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65339 00:13:02.812 13:07:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:03.378 13:07:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:03.378 13:07:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65339 00:13:03.378 13:07:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:03.635 Initializing NVMe Controllers 00:13:03.635 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:03.635 Controller IO queue size 128, less than required. 00:13:03.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:03.635 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:03.635 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:03.635 Initialization complete. Launching workers. 
00:13:03.635 ========================================================
00:13:03.635 Latency(us)
00:13:03.635 Device Information : IOPS MiB/s Average min max
00:13:03.635 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003922.25 1000169.23 1011919.31
00:13:03.635 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006491.85 1000168.82 1020136.93
00:13:03.635 ========================================================
00:13:03.635 Total : 256.00 0.12 1005207.05 1000168.82 1020136.93
00:13:03.635
00:13:03.893 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:03.893 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65339
00:13:03.893 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (65339) - No such process
00:13:03.893 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 65339
00:13:03.893 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:13:03.893 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:13:03.893 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:03.893 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:13:03.893 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:03.893 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:13:03.893 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:03.893 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:03.893 rmmod nvme_tcp
00:13:03.893 rmmod nvme_fabrics
00:13:03.893 rmmod nvme_keyring
00:13:03.893 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:03.893 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:13:03.893 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:13:03.893 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 65255 ']'
00:13:03.893 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 65255
00:13:03.893 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 65255 ']'
00:13:03.893 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 65255
00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65255
00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:04.151 13:07:10
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65255' 00:13:04.151 killing process with pid 65255 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 65255 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 65255 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:04.151 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:04.409 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:04.409 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:04.409 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:04.409 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:04.409 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:04.409 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:04.409 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.409 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:13:04.409 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.409 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:13:04.409 00:13:04.409 real 0m8.829s 00:13:04.409 user 0m27.079s 00:13:04.409 sys 0m1.613s 00:13:04.409 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.409 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:04.409 ************************************ 00:13:04.409 END TEST nvmf_delete_subsystem 00:13:04.409 ************************************ 00:13:04.409 13:07:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:04.409 13:07:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:04.409 13:07:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.409 13:07:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:04.409 ************************************ 00:13:04.409 START TEST nvmf_host_management 00:13:04.409 ************************************ 00:13:04.409 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:04.668 * Looking for test storage... 00:13:04.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:04.668 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:04.668 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:13:04.668 13:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:13:04.668 
13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:04.668 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:04.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.668 --rc genhtml_branch_coverage=1 00:13:04.668 --rc genhtml_function_coverage=1 00:13:04.668 --rc genhtml_legend=1 00:13:04.668 --rc geninfo_all_blocks=1 00:13:04.669 --rc geninfo_unexecuted_blocks=1 00:13:04.669 00:13:04.669 ' 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:04.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.669 --rc genhtml_branch_coverage=1 00:13:04.669 --rc genhtml_function_coverage=1 00:13:04.669 --rc genhtml_legend=1 00:13:04.669 --rc geninfo_all_blocks=1 00:13:04.669 --rc geninfo_unexecuted_blocks=1 00:13:04.669 00:13:04.669 ' 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:04.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.669 --rc genhtml_branch_coverage=1 00:13:04.669 --rc genhtml_function_coverage=1 00:13:04.669 --rc genhtml_legend=1 00:13:04.669 --rc geninfo_all_blocks=1 00:13:04.669 --rc geninfo_unexecuted_blocks=1 00:13:04.669 00:13:04.669 ' 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:04.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.669 --rc genhtml_branch_coverage=1 00:13:04.669 --rc 
genhtml_function_coverage=1 00:13:04.669 --rc genhtml_legend=1 00:13:04.669 --rc geninfo_all_blocks=1 00:13:04.669 --rc geninfo_unexecuted_blocks=1 00:13:04.669 00:13:04.669 ' 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:04.669 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:04.669 13:07:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:04.669 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:04.670 Cannot find device "nvmf_init_br" 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:04.670 Cannot find device "nvmf_init_br2" 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:04.670 Cannot find device "nvmf_tgt_br" 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:04.670 Cannot find device "nvmf_tgt_br2" 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:04.670 Cannot find device "nvmf_init_br" 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:04.670 Cannot find device "nvmf_init_br2" 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:04.670 Cannot find device "nvmf_tgt_br" 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:04.670 Cannot find device "nvmf_tgt_br2" 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:04.670 Cannot find device "nvmf_br" 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:13:04.670 13:07:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:04.670 Cannot find device "nvmf_init_if" 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:04.670 Cannot find device "nvmf_init_if2" 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:04.670 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:04.670 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:13:04.670 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:04.928 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:04.928 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:13:04.928 00:13:04.928 --- 10.0.0.3 ping statistics --- 00:13:04.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.928 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:04.928 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:13:04.928 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.096 ms 00:13:04.928 00:13:04.928 --- 10.0.0.4 ping statistics --- 00:13:04.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.928 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:04.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:04.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:13:04.928 00:13:04.928 --- 10.0.0.1 ping statistics --- 00:13:04.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.928 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:04.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:04.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:13:04.928 00:13:04.928 --- 10.0.0.2 ping statistics --- 00:13:04.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.928 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.928 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=65623 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 65623 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # 
'[' -z 65623 ']' 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:05.186 13:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:05.186 [2024-12-06 13:07:11.559515] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:13:05.186 [2024-12-06 13:07:11.559623] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.186 [2024-12-06 13:07:11.711256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:05.444 [2024-12-06 13:07:11.754712] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.444 [2024-12-06 13:07:11.754780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.444 [2024-12-06 13:07:11.754795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.444 [2024-12-06 13:07:11.754805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.444 [2024-12-06 13:07:11.754814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
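The target emitting these notices runs inside a network namespace that nvmftestinit assembled earlier in the trace. Condensed from the traced ip(8) and iptables calls (only one of the four veth pairs is shown; nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.4 follow the same pattern):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br   # veth pair for the target
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # one end into the namespace
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                           # bridge joining all the pairs
    ip link set nvmf_tgt_br master nvmf_br
    # NVMe/TCP ingress is allowed with rules tagged for later cleanup; teardown
    # removes exactly these via: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

Once the four-way ping check passes, the target is launched inside that namespace; -m 0x1E is binary 11110, which is why the next entries show exactly four reactors starting on cores 1 through 4:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E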
00:13:05.444 [2024-12-06 13:07:11.755688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.444 [2024-12-06 13:07:11.755796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.444 [2024-12-06 13:07:11.755899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:05.444 [2024-12-06 13:07:11.755905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:06.394 [2024-12-06 13:07:12.587126] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:06.394 Malloc0 00:13:06.394 [2024-12-06 13:07:12.659900] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=65700 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65700 /var/tmp/bdevperf.sock 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 65700 ']' 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:13:06.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:06.394 { 00:13:06.394 "params": { 00:13:06.394 "name": "Nvme$subsystem", 00:13:06.394 "trtype": "$TEST_TRANSPORT", 00:13:06.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:06.394 "adrfam": "ipv4", 00:13:06.394 "trsvcid": "$NVMF_PORT", 00:13:06.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:06.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:06.394 "hdgst": ${hdgst:-false}, 00:13:06.394 "ddgst": ${ddgst:-false} 00:13:06.394 }, 00:13:06.394 "method": "bdev_nvme_attach_controller" 00:13:06.394 } 00:13:06.394 EOF 00:13:06.394 )") 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:13:06.394 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:13:06.395 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:13:06.395 13:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:06.395 "params": { 00:13:06.395 "name": "Nvme0", 00:13:06.395 "trtype": "tcp", 00:13:06.395 "traddr": "10.0.0.3", 00:13:06.395 "adrfam": "ipv4", 00:13:06.395 "trsvcid": "4420", 00:13:06.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:06.395 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:06.395 "hdgst": false, 00:13:06.395 "ddgst": false 00:13:06.395 }, 00:13:06.395 "method": "bdev_nvme_attach_controller" 00:13:06.395 }' 00:13:06.395 [2024-12-06 13:07:12.775650] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:13:06.395 [2024-12-06 13:07:12.776330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65700 ] 00:13:06.653 [2024-12-06 13:07:12.920792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.653 [2024-12-06 13:07:12.953800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.653 Running I/O for 10 seconds... 00:13:06.653 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.653 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:13:06.653 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:06.653 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.653 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:06.653 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.653 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:06.653 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:06.653 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:06.653 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:06.653 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:06.653 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:06.653 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:06.653 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:06.653 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:06.653 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.653 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:06.653 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:06.912 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.912 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:13:06.912 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:13:06.912 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 
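Two details worth decoding in this stretch. First, the --json /dev/fd/63 argument in the bdevperf command line above is bash process substitution: the JSON document built by gen_nvmf_target_json (the bdev_nvme_attach_controller block quoted in the log) is handed to bdevperf as a pseudo-file. Second, waitforio is a polling loop over bdevperf's RPC socket; paraphrased from the @52-@62 trace entries:

    # poll up to 10 times, passing once bdev Nvme0n1 shows >= 100 completed reads,
    # i.e. the verify workload is demonstrably moving I/O to the target
    ret=1
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && { ret=0; break; }
        sleep 0.25
    done

In this run the first sample reads 67 ops, so the loop sleeps once; the next sample just below reads 408, the 100-op threshold is met, and the test proceeds to remove the host from the subsystem while I/O is still in flight.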
00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=408 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 408 -ge 100 ']' 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:07.170 [2024-12-06 13:07:13.528760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf24e20 is same with the state(6) to be set 00:13:07.170 [2024-12-06 13:07:13.528818] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf24e20 is same with the state(6) to be set 00:13:07.170 [2024-12-06 13:07:13.528831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf24e20 is same with the state(6) to be set 00:13:07.170 [2024-12-06 13:07:13.531059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.170 [2024-12-06 13:07:13.531114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.170 [2024-12-06 13:07:13.531128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.170 [2024-12-06 13:07:13.531138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.170 [2024-12-06 13:07:13.531148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.170 [2024-12-06 13:07:13.531157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.170 [2024-12-06 13:07:13.531168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.170 [2024-12-06 13:07:13.531177] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.170 [2024-12-06 13:07:13.531187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4c130 is same with the state(6) to be set 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.170 13:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:07.170 [2024-12-06 13:07:13.541739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4c130 (9): Bad file descriptor 00:13:07.170 [2024-12-06 13:07:13.541850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.170 [2024-12-06 13:07:13.541872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.170 [2024-12-06 13:07:13.541894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.170 [2024-12-06 13:07:13.541905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.170 [2024-12-06 13:07:13.541918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.170 [2024-12-06 13:07:13.541927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.170 [2024-12-06 13:07:13.541939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.170 [2024-12-06 13:07:13.541965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.170 [2024-12-06 13:07:13.541977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.170 [2024-12-06 13:07:13.541986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.170 [2024-12-06 13:07:13.541997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.170 [2024-12-06 13:07:13.542007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.170 [2024-12-06 13:07:13.542019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.170 
[2024-12-06 13:07:13.542033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... 49 further WRITE commands (cid:7-55, lba:66432-72576) traced here, each followed by the same ABORTED - SQ DELETION (00/08) completion while the controller reset tears down the submission queue ...] [2024-12-06 13:07:13.543143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.172 [2024-12-06
13:07:13.543152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.172 [2024-12-06 13:07:13.543162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.172 [2024-12-06 13:07:13.543171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.172 [2024-12-06 13:07:13.543182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.172 [2024-12-06 13:07:13.543191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.172 [2024-12-06 13:07:13.543203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.172 [2024-12-06 13:07:13.543211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.172 [2024-12-06 13:07:13.543225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.172 [2024-12-06 13:07:13.543234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.172 [2024-12-06 13:07:13.543245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.172 [2024-12-06 13:07:13.543254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.172 [2024-12-06 13:07:13.543265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.172 [2024-12-06 13:07:13.543274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.172 [2024-12-06 13:07:13.543285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.172 [2024-12-06 13:07:13.543295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.172 [2024-12-06 13:07:13.544540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:13:07.172 task offset: 65536 on job bdev=Nvme0n1 fails 00:13:07.172 00:13:07.172 Latency(us) 00:13:07.172 [2024-12-06T13:07:13.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.172 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:07.172 Job: Nvme0n1 ended in about 0.45 seconds with error 00:13:07.172 Verification LBA range: start 0x0 length 0x400 00:13:07.172 Nvme0n1 : 0.45 1137.87 71.12 142.23 0.00 48395.47 1995.87 48377.48 00:13:07.172 [2024-12-06T13:07:13.700Z] =================================================================================================================== 00:13:07.172 [2024-12-06T13:07:13.700Z] Total : 1137.87 71.12 142.23 0.00 48395.47 1995.87 48377.48 00:13:07.172 [2024-12-06 
13:07:13.546676] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:07.172 [2024-12-06 13:07:13.552862] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:13:08.105 13:07:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65700 00:13:08.105 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65700) - No such process 00:13:08.105 13:07:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:08.105 13:07:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:08.105 13:07:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:08.105 13:07:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:08.105 13:07:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:13:08.105 13:07:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:13:08.105 13:07:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:08.105 13:07:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:08.105 { 00:13:08.105 "params": { 00:13:08.105 "name": "Nvme$subsystem", 00:13:08.105 "trtype": "$TEST_TRANSPORT", 00:13:08.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:08.105 "adrfam": "ipv4", 00:13:08.105 "trsvcid": "$NVMF_PORT", 00:13:08.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:08.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:08.105 "hdgst": ${hdgst:-false}, 00:13:08.105 "ddgst": ${ddgst:-false} 00:13:08.105 }, 00:13:08.105 "method": "bdev_nvme_attach_controller" 00:13:08.105 } 00:13:08.105 EOF 00:13:08.105 )") 00:13:08.105 13:07:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:13:08.105 13:07:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:13:08.105 13:07:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:13:08.105 13:07:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:08.105 "params": { 00:13:08.105 "name": "Nvme0", 00:13:08.105 "trtype": "tcp", 00:13:08.105 "traddr": "10.0.0.3", 00:13:08.105 "adrfam": "ipv4", 00:13:08.105 "trsvcid": "4420", 00:13:08.105 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:08.105 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:08.105 "hdgst": false, 00:13:08.105 "ddgst": false 00:13:08.105 }, 00:13:08.105 "method": "bdev_nvme_attach_controller" 00:13:08.105 }' 00:13:08.105 [2024-12-06 13:07:14.619354] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
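The second bdevperf run above receives its config through a file descriptor: gen_nvmf_target_json prints one bdev_nvme_attach_controller fragment per subsystem and the result lands on /dev/fd/62. A hand-rolled equivalent might look like the sketch below; the outer "subsystems" wrapper is an assumption (only the attach-controller fragment is printed verbatim in the trace):

    config='{
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }'
    # same flags as the run above: queue depth 64, 64 KiB verify I/O for 1 second
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(printf '%s\n' "$config") -q 64 -o 65536 -w verify -t 1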
00:13:08.105 [2024-12-06 13:07:14.619443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65741 ] 00:13:08.368 [2024-12-06 13:07:14.763159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.368 [2024-12-06 13:07:14.799587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.627 Running I/O for 1 seconds... 00:13:09.582 1510.00 IOPS, 94.38 MiB/s 00:13:09.582 Latency(us) 00:13:09.582 [2024-12-06T13:07:16.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.582 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:09.582 Verification LBA range: start 0x0 length 0x400 00:13:09.582 Nvme0n1 : 1.04 1536.79 96.05 0.00 0.00 40823.65 5362.04 36223.53 00:13:09.582 [2024-12-06T13:07:16.110Z] =================================================================================================================== 00:13:09.582 [2024-12-06T13:07:16.110Z] Total : 1536.79 96.05 0.00 0.00 40823.65 5362.04 36223.53 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:09.840 rmmod nvme_tcp 00:13:09.840 rmmod nvme_fabrics 00:13:09.840 rmmod nvme_keyring 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 65623 ']' 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 65623 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 65623 ']' 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 65623 00:13:09.840 13:07:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65623 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65623' 00:13:09.840 killing process with pid 65623 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 65623 00:13:09.840 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 65623 00:13:10.099 [2024-12-06 13:07:16.392036] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:10.099 13:07:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:10.099 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:10.358 00:13:10.358 real 0m5.798s 00:13:10.358 user 0m21.241s 00:13:10.358 sys 0m1.275s 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.358 ************************************ 00:13:10.358 END TEST nvmf_host_management 00:13:10.358 ************************************ 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:10.358 ************************************ 00:13:10.358 START TEST nvmf_lvol 00:13:10.358 ************************************ 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:10.358 * Looking for test storage... 
00:13:10.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:10.358 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:10.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.618 --rc genhtml_branch_coverage=1 00:13:10.618 --rc genhtml_function_coverage=1 00:13:10.618 --rc genhtml_legend=1 00:13:10.618 --rc geninfo_all_blocks=1 00:13:10.618 --rc geninfo_unexecuted_blocks=1 00:13:10.618 00:13:10.618 ' 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:10.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.618 --rc genhtml_branch_coverage=1 00:13:10.618 --rc genhtml_function_coverage=1 00:13:10.618 --rc genhtml_legend=1 00:13:10.618 --rc geninfo_all_blocks=1 00:13:10.618 --rc geninfo_unexecuted_blocks=1 00:13:10.618 00:13:10.618 ' 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:10.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.618 --rc genhtml_branch_coverage=1 00:13:10.618 --rc genhtml_function_coverage=1 00:13:10.618 --rc genhtml_legend=1 00:13:10.618 --rc geninfo_all_blocks=1 00:13:10.618 --rc geninfo_unexecuted_blocks=1 00:13:10.618 00:13:10.618 ' 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:10.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.618 --rc genhtml_branch_coverage=1 00:13:10.618 --rc genhtml_function_coverage=1 00:13:10.618 --rc genhtml_legend=1 00:13:10.618 --rc geninfo_all_blocks=1 00:13:10.618 --rc geninfo_unexecuted_blocks=1 00:13:10.618 00:13:10.618 ' 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.618 13:07:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.618 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:10.619 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:10.619 
13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
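Condensed from the nvmf_veth_init commands traced below: the harness builds veth pairs for the initiator and target sides, moves the target-side interfaces into the nvmf_tgt_ns_spdk namespace, and bridges everything so 10.0.0.1/10.0.0.2 (initiator) can reach 10.0.0.3/10.0.0.4 (target) over TCP port 4420. A sketch of the first pair on each side, assembled from this log rather than from the common.sh source:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT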
00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:10.619 Cannot find device "nvmf_init_br" 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:10.619 Cannot find device "nvmf_init_br2" 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:10.619 Cannot find device "nvmf_tgt_br" 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:10.619 Cannot find device "nvmf_tgt_br2" 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:10.619 Cannot find device "nvmf_init_br" 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:10.619 Cannot find device "nvmf_init_br2" 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:13:10.619 13:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:10.619 Cannot find device "nvmf_tgt_br" 00:13:10.619 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:13:10.619 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:10.619 Cannot find device "nvmf_tgt_br2" 00:13:10.619 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:13:10.619 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:10.619 Cannot find device "nvmf_br" 00:13:10.619 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:13:10.619 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:10.619 Cannot find device "nvmf_init_if" 00:13:10.619 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:13:10.619 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:10.619 Cannot find device "nvmf_init_if2" 00:13:10.619 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:13:10.619 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:10.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:10.619 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:13:10.619 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:10.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:13:10.620 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:13:10.620 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:10.620 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:10.620 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:10.620 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:10.620 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:10.620 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:10.620 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:10.620 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:10.620 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:10.620 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:10.620 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:10.620 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:10.620 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:10.620 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:10.878 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:10.878 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:13:10.878 00:13:10.878 --- 10.0.0.3 ping statistics --- 00:13:10.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.878 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:10.878 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:10.878 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:13:10.878 00:13:10.878 --- 10.0.0.4 ping statistics --- 00:13:10.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.878 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:10.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:10.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:13:10.878 00:13:10.878 --- 10.0.0.1 ping statistics --- 00:13:10.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.878 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:10.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:10.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:13:10.878 00:13:10.878 --- 10.0.0.2 ping statistics --- 00:13:10.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.878 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:10.878 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:10.879 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:10.879 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:10.879 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=66006 00:13:10.879 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:10.879 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 66006 00:13:10.879 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 66006 ']' 00:13:10.879 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.879 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.879 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.879 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.879 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:10.879 [2024-12-06 13:07:17.361611] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
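Note on the -m 0x7 core mask passed to nvmf_tgt above: the mask is a plain bitmask of CPU cores, so 0x7 (binary 111) selects cores 0, 1 and 2, which matches the "Total cores available: 3" and the three "Reactor started" notices in the startup output that follows. A quick shell sketch for decoding such a mask (purely illustrative, not part of the harness):

    mask=0x7
    for ((i = 0; i < 64; i++)); do
        # test bit i of the mask; each set bit gets its own reactor thread
        (( (mask >> i) & 1 )) && echo "reactor expected on core $i"
    done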
00:13:10.879 [2024-12-06 13:07:17.361699] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.136 [2024-12-06 13:07:17.516282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:11.136 [2024-12-06 13:07:17.554536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.136 [2024-12-06 13:07:17.554595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.136 [2024-12-06 13:07:17.554609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.136 [2024-12-06 13:07:17.554619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.136 [2024-12-06 13:07:17.554628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.136 [2024-12-06 13:07:17.555450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.136 [2024-12-06 13:07:17.555550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.136 [2024-12-06 13:07:17.555556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.136 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.136 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:13:11.136 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:11.136 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:11.136 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:11.393 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.393 13:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:11.650 [2024-12-06 13:07:18.014793] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.650 13:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:11.907 13:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:11.907 13:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:12.471 13:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:12.471 13:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:12.472 13:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:13.037 13:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8fe5a8fa-76df-4131-84e0-2c4d918475ad 00:13:13.037 13:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
8fe5a8fa-76df-4131-84e0-2c4d918475ad lvol 20 00:13:13.295 13:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6440d82b-2ac3-446b-84e4-bacb6bf40bf0 00:13:13.295 13:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:13.552 13:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6440d82b-2ac3-446b-84e4-bacb6bf40bf0 00:13:14.118 13:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:13:14.376 [2024-12-06 13:07:20.663494] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:14.376 13:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:14.635 13:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=66146 00:13:14.635 13:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:14.635 13:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:15.569 13:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 6440d82b-2ac3-446b-84e4-bacb6bf40bf0 MY_SNAPSHOT 00:13:16.174 13:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d815ffee-4282-4ea0-863f-ab8ed90f7a53 00:13:16.174 13:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 6440d82b-2ac3-446b-84e4-bacb6bf40bf0 30 00:13:16.432 13:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d815ffee-4282-4ea0-863f-ab8ed90f7a53 MY_CLONE 00:13:16.691 13:07:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2c38e6f7-c456-41ee-9c98-761e3c63d719 00:13:16.691 13:07:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 2c38e6f7-c456-41ee-9c98-761e3c63d719 00:13:17.657 13:07:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 66146 00:13:25.776 Initializing NVMe Controllers 00:13:25.776 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:13:25.776 Controller IO queue size 128, less than required. 00:13:25.776 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:25.776 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:25.776 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:25.776 Initialization complete. Launching workers. 
00:13:25.776 ======================================================== 00:13:25.776 Latency(us) 00:13:25.777 Device Information : IOPS MiB/s Average min max 00:13:25.777 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10233.30 39.97 12518.00 2016.16 60684.19 00:13:25.777 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9752.90 38.10 13134.59 3347.46 68649.80 00:13:25.777 ======================================================== 00:13:25.777 Total : 19986.20 78.07 12818.89 2016.16 68649.80 00:13:25.777 00:13:25.777 13:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:25.777 13:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6440d82b-2ac3-446b-84e4-bacb6bf40bf0 00:13:25.777 13:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8fe5a8fa-76df-4131-84e0-2c4d918475ad 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:25.777 rmmod nvme_tcp 00:13:25.777 rmmod nvme_fabrics 00:13:25.777 rmmod nvme_keyring 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 66006 ']' 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 66006 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 66006 ']' 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 66006 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.777 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66006 00:13:26.045 killing process with pid 66006 00:13:26.045 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.045 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.045 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 66006' 00:13:26.045 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 66006 00:13:26.045 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 66006 00:13:26.045 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:26.045 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:26.045 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:26.045 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:13:26.045 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:13:26.045 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:26.045 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:13:26.045 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:26.045 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:26.045 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:26.045 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:26.045 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:26.045 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:13:26.317 00:13:26.317 real 0m16.016s 00:13:26.317 user 1m6.815s 00:13:26.317 sys 0m3.794s 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:26.317 ************************************ 00:13:26.317 END TEST nvmf_lvol 00:13:26.317 ************************************ 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:26.317 ************************************ 00:13:26.317 START TEST nvmf_lvs_grow 00:13:26.317 ************************************ 00:13:26.317 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:26.576 * Looking for test storage... 00:13:26.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:26.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.576 --rc genhtml_branch_coverage=1 00:13:26.576 --rc genhtml_function_coverage=1 00:13:26.576 --rc genhtml_legend=1 00:13:26.576 --rc geninfo_all_blocks=1 00:13:26.576 --rc geninfo_unexecuted_blocks=1 00:13:26.576 00:13:26.576 ' 00:13:26.576 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:26.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.576 --rc genhtml_branch_coverage=1 00:13:26.576 --rc genhtml_function_coverage=1 00:13:26.576 --rc genhtml_legend=1 00:13:26.576 --rc geninfo_all_blocks=1 00:13:26.577 --rc geninfo_unexecuted_blocks=1 00:13:26.577 00:13:26.577 ' 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:26.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.577 --rc genhtml_branch_coverage=1 00:13:26.577 --rc genhtml_function_coverage=1 00:13:26.577 --rc genhtml_legend=1 00:13:26.577 --rc geninfo_all_blocks=1 00:13:26.577 --rc geninfo_unexecuted_blocks=1 00:13:26.577 00:13:26.577 ' 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:26.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.577 --rc genhtml_branch_coverage=1 00:13:26.577 --rc genhtml_function_coverage=1 00:13:26.577 --rc genhtml_legend=1 00:13:26.577 --rc geninfo_all_blocks=1 00:13:26.577 --rc geninfo_unexecuted_blocks=1 00:13:26.577 00:13:26.577 ' 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:26.577 13:07:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:26.577 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
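The "line 33: [: : integer expression expected" message above is a bash artifact rather than a test failure: as the xtrace shows, common.sh line 33 evaluates '[' '' -eq 1 ']' with an empty value, and test's -eq operator requires an integer on both sides. A minimal repro plus the usual guard (illustrative, not taken from the harness):

    flag=""
    [ "$flag" -eq 1 ]                 # prints: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] || echo "flag is off"   # defaulting the empty value avoids the error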
00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:26.577 13:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:26.577 Cannot find device "nvmf_init_br" 00:13:26.577 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:13:26.577 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:26.577 Cannot find device "nvmf_init_br2" 00:13:26.577 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:13:26.577 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:26.577 Cannot find device "nvmf_tgt_br" 00:13:26.577 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:13:26.577 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:26.577 Cannot find device "nvmf_tgt_br2" 00:13:26.577 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:13:26.577 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:26.577 Cannot find device "nvmf_init_br" 00:13:26.577 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:13:26.578 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:26.578 Cannot find device "nvmf_init_br2" 00:13:26.578 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:13:26.578 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:26.578 Cannot find device "nvmf_tgt_br" 00:13:26.578 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:13:26.578 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:26.578 Cannot find device "nvmf_tgt_br2" 00:13:26.578 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:13:26.578 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:26.578 Cannot find device "nvmf_br" 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:26.836 Cannot find device "nvmf_init_if" 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:26.836 Cannot find device "nvmf_init_if2" 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:26.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:26.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:26.836 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
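Condensed sketch of the topology nvmf_veth_init builds in the lines above: a veth pair per interface, the target ends moved into the nvmf_tgt_ns_spdk namespace, and everything joined by the nvmf_br bridge. Names and addresses are copied from the log; bringing the links up and the second pair (10.0.0.2/10.0.0.4) follow the same pattern, and the iptables ACCEPT rules for TCP port 4420 appear just below:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, host namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                     # bridge the host-side peers
    ip link set nvmf_tgt_br master nvmf_br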
00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:27.095 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:27.095 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:13:27.095 00:13:27.095 --- 10.0.0.3 ping statistics --- 00:13:27.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.095 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:27.095 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:27.095 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:13:27.095 00:13:27.095 --- 10.0.0.4 ping statistics --- 00:13:27.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.095 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:27.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:27.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:13:27.095 00:13:27.095 --- 10.0.0.1 ping statistics --- 00:13:27.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.095 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:27.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:27.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:13:27.095 00:13:27.095 --- 10.0.0.2 ping statistics --- 00:13:27.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.095 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=66565 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 66565 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 66565 ']' 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.095 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:27.095 [2024-12-06 13:07:33.536406] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
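nvmfappstart runs the target inside the namespace and waitforlisten then polls its RPC socket (up to the max_retries=100 shown) before the test proceeds. A minimal stand-in, assuming the default /var/tmp/spdk.sock and using the generic rpc_get_methods call as the liveness probe in place of the harness's fuller loop:

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1    # bail out if the target process died
        sleep 0.1
    done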
00:13:27.095 [2024-12-06 13:07:33.536511] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.353 [2024-12-06 13:07:33.680026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.353 [2024-12-06 13:07:33.711491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.353 [2024-12-06 13:07:33.711545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.353 [2024-12-06 13:07:33.711557] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.353 [2024-12-06 13:07:33.711565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.353 [2024-12-06 13:07:33.711572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.353 [2024-12-06 13:07:33.711880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.353 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.353 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:13:27.353 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:27.353 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:27.353 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:27.353 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.353 13:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:27.612 [2024-12-06 13:07:34.133131] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:27.870 13:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:27.870 13:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:27.870 13:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.870 13:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:27.870 ************************************ 00:13:27.870 START TEST lvs_grow_clean 00:13:27.870 ************************************ 00:13:27.870 13:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:13:27.870 13:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:27.870 13:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:27.870 13:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:27.870 13:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:27.870 13:07:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:27.870 13:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:27.870 13:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:27.870 13:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:27.870 13:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:28.128 13:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:28.128 13:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:28.387 13:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=10fd63e1-3158-42fd-8bf2-7c0df5ef83e1 00:13:28.387 13:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10fd63e1-3158-42fd-8bf2-7c0df5ef83e1 00:13:28.387 13:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:28.646 13:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:28.646 13:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:28.646 13:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 10fd63e1-3158-42fd-8bf2-7c0df5ef83e1 lvol 150 00:13:28.904 13:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6146f3de-ebf7-4fe3-beb8-5c65691e0d0a 00:13:28.904 13:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:28.904 13:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:29.469 [2024-12-06 13:07:35.703009] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:29.469 [2024-12-06 13:07:35.703091] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:29.469 true 00:13:29.469 13:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10fd63e1-3158-42fd-8bf2-7c0df5ef83e1 00:13:29.469 13:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:29.727 13:07:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:29.727 13:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:29.986 13:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6146f3de-ebf7-4fe3-beb8-5c65691e0d0a 00:13:30.244 13:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:13:30.502 [2024-12-06 13:07:37.019684] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:30.762 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:31.020 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66724 00:13:31.020 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:31.020 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:31.020 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66724 /var/tmp/bdevperf.sock 00:13:31.020 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 66724 ']' 00:13:31.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:31.020 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:31.020 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.020 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:31.020 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.020 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:31.020 [2024-12-06 13:07:37.373787] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
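The bdevperf leg visible in the next lines works in three steps: start bdevperf with -z so it idles until configured over its own RPC socket, attach the exported namespace as bdev Nvme0n1 with bdev_nvme_attach_controller, then kick off the workload with bdevperf.py perform_tests. Put together (commands as they appear in the log, with the same repo paths):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # configure the idle bdevperf instance over its dedicated socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # start the configured randwrite run and wait for the result table
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests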
00:13:31.020 [2024-12-06 13:07:37.374094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66724 ] 00:13:31.020 [2024-12-06 13:07:37.524709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.277 [2024-12-06 13:07:37.565253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.277 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:31.277 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:13:31.277 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:31.537 Nvme0n1 00:13:31.537 13:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:31.795 [ 00:13:31.795 { 00:13:31.795 "aliases": [ 00:13:31.795 "6146f3de-ebf7-4fe3-beb8-5c65691e0d0a" 00:13:31.795 ], 00:13:31.795 "assigned_rate_limits": { 00:13:31.795 "r_mbytes_per_sec": 0, 00:13:31.795 "rw_ios_per_sec": 0, 00:13:31.795 "rw_mbytes_per_sec": 0, 00:13:31.795 "w_mbytes_per_sec": 0 00:13:31.795 }, 00:13:31.795 "block_size": 4096, 00:13:31.795 "claimed": false, 00:13:31.795 "driver_specific": { 00:13:31.795 "mp_policy": "active_passive", 00:13:31.795 "nvme": [ 00:13:31.795 { 00:13:31.795 "ctrlr_data": { 00:13:31.795 "ana_reporting": false, 00:13:31.795 "cntlid": 1, 00:13:31.795 "firmware_revision": "25.01", 00:13:31.795 "model_number": "SPDK bdev Controller", 00:13:31.795 "multi_ctrlr": true, 00:13:31.795 "oacs": { 00:13:31.795 "firmware": 0, 00:13:31.795 "format": 0, 00:13:31.795 "ns_manage": 0, 00:13:31.795 "security": 0 00:13:31.795 }, 00:13:31.795 "serial_number": "SPDK0", 00:13:31.795 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:31.795 "vendor_id": "0x8086" 00:13:31.795 }, 00:13:31.795 "ns_data": { 00:13:31.795 "can_share": true, 00:13:31.795 "id": 1 00:13:31.795 }, 00:13:31.795 "trid": { 00:13:31.795 "adrfam": "IPv4", 00:13:31.795 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:31.795 "traddr": "10.0.0.3", 00:13:31.795 "trsvcid": "4420", 00:13:31.795 "trtype": "TCP" 00:13:31.795 }, 00:13:31.795 "vs": { 00:13:31.795 "nvme_version": "1.3" 00:13:31.795 } 00:13:31.795 } 00:13:31.795 ] 00:13:31.795 }, 00:13:31.795 "memory_domains": [ 00:13:31.795 { 00:13:31.795 "dma_device_id": "system", 00:13:31.795 "dma_device_type": 1 00:13:31.795 } 00:13:31.795 ], 00:13:31.795 "name": "Nvme0n1", 00:13:31.795 "num_blocks": 38912, 00:13:31.795 "numa_id": -1, 00:13:31.795 "product_name": "NVMe disk", 00:13:31.795 "supported_io_types": { 00:13:31.795 "abort": true, 00:13:31.795 "compare": true, 00:13:31.795 "compare_and_write": true, 00:13:31.795 "copy": true, 00:13:31.795 "flush": true, 00:13:31.795 "get_zone_info": false, 00:13:31.795 "nvme_admin": true, 00:13:31.795 "nvme_io": true, 00:13:31.795 "nvme_io_md": false, 00:13:31.795 "nvme_iov_md": false, 00:13:31.795 "read": true, 00:13:31.795 "reset": true, 00:13:31.795 "seek_data": false, 00:13:31.795 "seek_hole": false, 00:13:31.795 "unmap": true, 00:13:31.795 
"write": true, 00:13:31.795 "write_zeroes": true, 00:13:31.795 "zcopy": false, 00:13:31.795 "zone_append": false, 00:13:31.795 "zone_management": false 00:13:31.795 }, 00:13:31.795 "uuid": "6146f3de-ebf7-4fe3-beb8-5c65691e0d0a", 00:13:31.795 "zoned": false 00:13:31.795 } 00:13:31.795 ] 00:13:32.052 13:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:32.053 13:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66759 00:13:32.053 13:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:32.053 Running I/O for 10 seconds... 00:13:32.989 Latency(us) 00:13:32.989 [2024-12-06T13:07:39.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:32.989 Nvme0n1 : 1.00 7927.00 30.96 0.00 0.00 0.00 0.00 0.00 00:13:32.989 [2024-12-06T13:07:39.517Z] =================================================================================================================== 00:13:32.989 [2024-12-06T13:07:39.517Z] Total : 7927.00 30.96 0.00 0.00 0.00 0.00 0.00 00:13:32.989 00:13:33.924 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 10fd63e1-3158-42fd-8bf2-7c0df5ef83e1 00:13:34.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:34.182 Nvme0n1 : 2.00 7938.00 31.01 0.00 0.00 0.00 0.00 0.00 00:13:34.182 [2024-12-06T13:07:40.710Z] =================================================================================================================== 00:13:34.182 [2024-12-06T13:07:40.710Z] Total : 7938.00 31.01 0.00 0.00 0.00 0.00 0.00 00:13:34.182 00:13:34.182 true 00:13:34.182 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10fd63e1-3158-42fd-8bf2-7c0df5ef83e1 00:13:34.182 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:34.842 13:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:34.842 13:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:34.842 13:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66759 00:13:35.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.120 Nvme0n1 : 3.00 7900.67 30.86 0.00 0.00 0.00 0.00 0.00 00:13:35.120 [2024-12-06T13:07:41.648Z] =================================================================================================================== 00:13:35.120 [2024-12-06T13:07:41.648Z] Total : 7900.67 30.86 0.00 0.00 0.00 0.00 0.00 00:13:35.120 00:13:36.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.056 Nvme0n1 : 4.00 7768.25 30.34 0.00 0.00 0.00 0.00 0.00 00:13:36.056 [2024-12-06T13:07:42.584Z] =================================================================================================================== 00:13:36.056 [2024-12-06T13:07:42.584Z] Total : 7768.25 30.34 0.00 0.00 0.00 
0.00 0.00 00:13:36.056 00:13:36.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.992 Nvme0n1 : 5.00 7768.80 30.35 0.00 0.00 0.00 0.00 0.00 00:13:36.992 [2024-12-06T13:07:43.520Z] =================================================================================================================== 00:13:36.992 [2024-12-06T13:07:43.520Z] Total : 7768.80 30.35 0.00 0.00 0.00 0.00 0.00 00:13:36.992 00:13:38.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.367 Nvme0n1 : 6.00 7753.17 30.29 0.00 0.00 0.00 0.00 0.00 00:13:38.367 [2024-12-06T13:07:44.895Z] =================================================================================================================== 00:13:38.367 [2024-12-06T13:07:44.895Z] Total : 7753.17 30.29 0.00 0.00 0.00 0.00 0.00 00:13:38.367 00:13:39.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:39.305 Nvme0n1 : 7.00 7734.86 30.21 0.00 0.00 0.00 0.00 0.00 00:13:39.305 [2024-12-06T13:07:45.833Z] =================================================================================================================== 00:13:39.305 [2024-12-06T13:07:45.833Z] Total : 7734.86 30.21 0.00 0.00 0.00 0.00 0.00 00:13:39.305 00:13:40.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:40.240 Nvme0n1 : 8.00 7721.12 30.16 0.00 0.00 0.00 0.00 0.00 00:13:40.240 [2024-12-06T13:07:46.768Z] =================================================================================================================== 00:13:40.240 [2024-12-06T13:07:46.768Z] Total : 7721.12 30.16 0.00 0.00 0.00 0.00 0.00 00:13:40.240 00:13:41.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:41.172 Nvme0n1 : 9.00 7652.67 29.89 0.00 0.00 0.00 0.00 0.00 00:13:41.172 [2024-12-06T13:07:47.700Z] =================================================================================================================== 00:13:41.172 [2024-12-06T13:07:47.700Z] Total : 7652.67 29.89 0.00 0.00 0.00 0.00 0.00 00:13:41.172 00:13:42.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.105 Nvme0n1 : 10.00 7620.20 29.77 0.00 0.00 0.00 0.00 0.00 00:13:42.105 [2024-12-06T13:07:48.633Z] =================================================================================================================== 00:13:42.105 [2024-12-06T13:07:48.633Z] Total : 7620.20 29.77 0.00 0.00 0.00 0.00 0.00 00:13:42.105 00:13:42.105 00:13:42.105 Latency(us) 00:13:42.105 [2024-12-06T13:07:48.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.105 Nvme0n1 : 10.02 7619.48 29.76 0.00 0.00 16786.93 5779.08 100567.97 00:13:42.105 [2024-12-06T13:07:48.633Z] =================================================================================================================== 00:13:42.105 [2024-12-06T13:07:48.633Z] Total : 7619.48 29.76 0.00 0.00 16786.93 5779.08 100567.97 00:13:42.105 { 00:13:42.105 "results": [ 00:13:42.105 { 00:13:42.105 "job": "Nvme0n1", 00:13:42.105 "core_mask": "0x2", 00:13:42.105 "workload": "randwrite", 00:13:42.105 "status": "finished", 00:13:42.105 "queue_depth": 128, 00:13:42.105 "io_size": 4096, 00:13:42.105 "runtime": 10.017746, 00:13:42.105 "iops": 7619.478473500925, 00:13:42.105 "mibps": 29.76358778711299, 00:13:42.105 "io_failed": 0, 00:13:42.105 "io_timeout": 0, 00:13:42.105 "avg_latency_us": 
16786.933456689254, 00:13:42.105 "min_latency_us": 5779.083636363636, 00:13:42.105 "max_latency_us": 100567.9709090909 00:13:42.105 } 00:13:42.105 ], 00:13:42.105 "core_count": 1 00:13:42.105 } 00:13:42.105 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66724 00:13:42.105 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 66724 ']' 00:13:42.105 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 66724 00:13:42.105 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:13:42.105 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.105 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66724 00:13:42.105 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:42.105 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:42.105 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66724' 00:13:42.105 killing process with pid 66724 00:13:42.105 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 66724 00:13:42.105 Received shutdown signal, test time was about 10.000000 seconds 00:13:42.105 00:13:42.105 Latency(us) 00:13:42.105 [2024-12-06T13:07:48.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.105 [2024-12-06T13:07:48.633Z] =================================================================================================================== 00:13:42.105 [2024-12-06T13:07:48.633Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:42.105 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 66724 00:13:42.363 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:42.621 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:42.878 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:42.878 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10fd63e1-3158-42fd-8bf2-7c0df5ef83e1 00:13:43.136 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:43.136 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:43.136 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:43.394 [2024-12-06 13:07:49.918692] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore 
lvs 00:13:43.651 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10fd63e1-3158-42fd-8bf2-7c0df5ef83e1 00:13:43.651 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:13:43.651 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10fd63e1-3158-42fd-8bf2-7c0df5ef83e1 00:13:43.651 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:43.651 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.651 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:43.651 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.651 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:43.651 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.651 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:43.651 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:43.651 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10fd63e1-3158-42fd-8bf2-7c0df5ef83e1 00:13:43.909 2024/12/06 13:07:50 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:10fd63e1-3158-42fd-8bf2-7c0df5ef83e1], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:13:43.909 request: 00:13:43.909 { 00:13:43.909 "method": "bdev_lvol_get_lvstores", 00:13:43.909 "params": { 00:13:43.909 "uuid": "10fd63e1-3158-42fd-8bf2-7c0df5ef83e1" 00:13:43.909 } 00:13:43.909 } 00:13:43.909 Got JSON-RPC error response 00:13:43.909 GoRPCClient: error on JSON-RPC call 00:13:43.909 13:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:13:43.909 13:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:43.909 13:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:43.909 13:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:43.909 13:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:44.166 aio_bdev 00:13:44.166 13:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6146f3de-ebf7-4fe3-beb8-5c65691e0d0a 00:13:44.166 13:07:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=6146f3de-ebf7-4fe3-beb8-5c65691e0d0a 00:13:44.166 13:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:44.166 13:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:13:44.166 13:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:44.166 13:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:44.166 13:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:44.424 13:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6146f3de-ebf7-4fe3-beb8-5c65691e0d0a -t 2000 00:13:44.682 [ 00:13:44.682 { 00:13:44.682 "aliases": [ 00:13:44.682 "lvs/lvol" 00:13:44.682 ], 00:13:44.682 "assigned_rate_limits": { 00:13:44.682 "r_mbytes_per_sec": 0, 00:13:44.682 "rw_ios_per_sec": 0, 00:13:44.682 "rw_mbytes_per_sec": 0, 00:13:44.682 "w_mbytes_per_sec": 0 00:13:44.682 }, 00:13:44.682 "block_size": 4096, 00:13:44.682 "claimed": false, 00:13:44.682 "driver_specific": { 00:13:44.682 "lvol": { 00:13:44.682 "base_bdev": "aio_bdev", 00:13:44.682 "clone": false, 00:13:44.682 "esnap_clone": false, 00:13:44.682 "lvol_store_uuid": "10fd63e1-3158-42fd-8bf2-7c0df5ef83e1", 00:13:44.682 "num_allocated_clusters": 38, 00:13:44.682 "snapshot": false, 00:13:44.682 "thin_provision": false 00:13:44.682 } 00:13:44.682 }, 00:13:44.682 "name": "6146f3de-ebf7-4fe3-beb8-5c65691e0d0a", 00:13:44.682 "num_blocks": 38912, 00:13:44.682 "product_name": "Logical Volume", 00:13:44.682 "supported_io_types": { 00:13:44.682 "abort": false, 00:13:44.682 "compare": false, 00:13:44.682 "compare_and_write": false, 00:13:44.682 "copy": false, 00:13:44.682 "flush": false, 00:13:44.682 "get_zone_info": false, 00:13:44.682 "nvme_admin": false, 00:13:44.682 "nvme_io": false, 00:13:44.682 "nvme_io_md": false, 00:13:44.682 "nvme_iov_md": false, 00:13:44.682 "read": true, 00:13:44.682 "reset": true, 00:13:44.682 "seek_data": true, 00:13:44.682 "seek_hole": true, 00:13:44.682 "unmap": true, 00:13:44.682 "write": true, 00:13:44.682 "write_zeroes": true, 00:13:44.682 "zcopy": false, 00:13:44.682 "zone_append": false, 00:13:44.682 "zone_management": false 00:13:44.682 }, 00:13:44.682 "uuid": "6146f3de-ebf7-4fe3-beb8-5c65691e0d0a", 00:13:44.682 "zoned": false 00:13:44.682 } 00:13:44.682 ] 00:13:44.682 13:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:13:44.682 13:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:44.682 13:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10fd63e1-3158-42fd-8bf2-7c0df5ef83e1 00:13:45.246 13:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:45.246 13:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
10fd63e1-3158-42fd-8bf2-7c0df5ef83e1 00:13:45.246 13:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:45.504 13:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:45.504 13:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6146f3de-ebf7-4fe3-beb8-5c65691e0d0a 00:13:45.762 13:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 10fd63e1-3158-42fd-8bf2-7c0df5ef83e1 00:13:46.020 13:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:46.278 13:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:46.846 ************************************ 00:13:46.846 END TEST lvs_grow_clean 00:13:46.846 ************************************ 00:13:46.846 00:13:46.846 real 0m18.995s 00:13:46.846 user 0m18.363s 00:13:46.846 sys 0m2.161s 00:13:46.846 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.846 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:46.846 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:46.846 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:46.846 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.846 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:46.846 ************************************ 00:13:46.846 START TEST lvs_grow_dirty 00:13:46.846 ************************************ 00:13:46.846 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:13:46.846 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:46.846 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:46.846 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:46.846 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:46.846 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:46.846 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:46.846 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:46.846 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:46.846 
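Before the dirty variant ramps up, note what the clean pass just finished verifying: deleting the AIO bdev hot-removes the lvstore, a subsequent bdev_lvol_get_lvstores must fail with JSON-RPC Code=-19 (No such device), and re-creating the AIO bdev reloads the lvstore with the lvol intact (38 allocated clusters, 61 free of 99). A sketch of that round-trip, assuming the NOT helper from autotest_common.sh, which inverts the wrapped command's exit status:

  rpc.py bdev_aio_delete aio_bdev                 # hot-removes lvstore "lvs"
  NOT rpc.py bdev_lvol_get_lvstores -u "$lvs"     # expect Code=-19 Msg=No such device
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_wait_for_examine                    # let the reloaded lvol register before bdev_get_bdevs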
13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:47.105 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:47.105 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:47.363 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f06ffda8-f395-47d1-84a2-482a5ea8cdc3 00:13:47.363 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:47.363 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f06ffda8-f395-47d1-84a2-482a5ea8cdc3 00:13:47.929 13:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:47.929 13:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:47.929 13:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f06ffda8-f395-47d1-84a2-482a5ea8cdc3 lvol 150 00:13:48.189 13:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=23a475e8-e280-4e0e-8b53-3c5bea654493 00:13:48.189 13:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:48.189 13:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:48.449 [2024-12-06 13:07:54.884058] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:48.449 [2024-12-06 13:07:54.884147] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:48.449 true 00:13:48.449 13:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:48.449 13:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f06ffda8-f395-47d1-84a2-482a5ea8cdc3 00:13:49.016 13:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:49.016 13:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:49.274 13:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 23a475e8-e280-4e0e-8b53-3c5bea654493 00:13:49.531 13:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:13:49.876 [2024-12-06 13:07:56.176847] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:49.876 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:50.148 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=67169 00:13:50.148 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:50.148 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 67169 /var/tmp/bdevperf.sock 00:13:50.148 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 67169 ']' 00:13:50.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:50.148 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:50.148 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:50.148 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.148 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:50.148 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.148 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:50.148 [2024-12-06 13:07:56.589661] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
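As in the clean pass, the lvol is exported over NVMe/TCP and a bdevperf instance (a separate SPDK app driven over /var/tmp/bdevperf.sock) attaches to it as an initiator just below. The export-and-attach sequence, assuming the target listener on 10.0.0.3:4420 used throughout this run:

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0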
00:13:50.148 [2024-12-06 13:07:56.589775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67169 ] 00:13:50.423 [2024-12-06 13:07:56.738425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.423 [2024-12-06 13:07:56.779594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.423 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.423 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:13:50.423 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:50.989 Nvme0n1 00:13:50.989 13:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:51.247 [ 00:13:51.247 { 00:13:51.247 "aliases": [ 00:13:51.247 "23a475e8-e280-4e0e-8b53-3c5bea654493" 00:13:51.247 ], 00:13:51.247 "assigned_rate_limits": { 00:13:51.247 "r_mbytes_per_sec": 0, 00:13:51.247 "rw_ios_per_sec": 0, 00:13:51.247 "rw_mbytes_per_sec": 0, 00:13:51.247 "w_mbytes_per_sec": 0 00:13:51.247 }, 00:13:51.247 "block_size": 4096, 00:13:51.247 "claimed": false, 00:13:51.247 "driver_specific": { 00:13:51.247 "mp_policy": "active_passive", 00:13:51.247 "nvme": [ 00:13:51.247 { 00:13:51.247 "ctrlr_data": { 00:13:51.247 "ana_reporting": false, 00:13:51.247 "cntlid": 1, 00:13:51.247 "firmware_revision": "25.01", 00:13:51.247 "model_number": "SPDK bdev Controller", 00:13:51.247 "multi_ctrlr": true, 00:13:51.247 "oacs": { 00:13:51.247 "firmware": 0, 00:13:51.247 "format": 0, 00:13:51.247 "ns_manage": 0, 00:13:51.247 "security": 0 00:13:51.247 }, 00:13:51.247 "serial_number": "SPDK0", 00:13:51.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:51.247 "vendor_id": "0x8086" 00:13:51.247 }, 00:13:51.247 "ns_data": { 00:13:51.247 "can_share": true, 00:13:51.247 "id": 1 00:13:51.247 }, 00:13:51.247 "trid": { 00:13:51.247 "adrfam": "IPv4", 00:13:51.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:51.247 "traddr": "10.0.0.3", 00:13:51.247 "trsvcid": "4420", 00:13:51.247 "trtype": "TCP" 00:13:51.247 }, 00:13:51.247 "vs": { 00:13:51.247 "nvme_version": "1.3" 00:13:51.247 } 00:13:51.247 } 00:13:51.247 ] 00:13:51.247 }, 00:13:51.247 "memory_domains": [ 00:13:51.247 { 00:13:51.247 "dma_device_id": "system", 00:13:51.247 "dma_device_type": 1 00:13:51.247 } 00:13:51.247 ], 00:13:51.247 "name": "Nvme0n1", 00:13:51.247 "num_blocks": 38912, 00:13:51.247 "numa_id": -1, 00:13:51.247 "product_name": "NVMe disk", 00:13:51.247 "supported_io_types": { 00:13:51.247 "abort": true, 00:13:51.247 "compare": true, 00:13:51.247 "compare_and_write": true, 00:13:51.247 "copy": true, 00:13:51.247 "flush": true, 00:13:51.247 "get_zone_info": false, 00:13:51.247 "nvme_admin": true, 00:13:51.247 "nvme_io": true, 00:13:51.247 "nvme_io_md": false, 00:13:51.247 "nvme_iov_md": false, 00:13:51.247 "read": true, 00:13:51.247 "reset": true, 00:13:51.247 "seek_data": false, 00:13:51.247 "seek_hole": false, 00:13:51.247 "unmap": true, 00:13:51.247 
"write": true, 00:13:51.247 "write_zeroes": true, 00:13:51.247 "zcopy": false, 00:13:51.247 "zone_append": false, 00:13:51.247 "zone_management": false 00:13:51.247 }, 00:13:51.247 "uuid": "23a475e8-e280-4e0e-8b53-3c5bea654493", 00:13:51.247 "zoned": false 00:13:51.247 } 00:13:51.247 ] 00:13:51.247 13:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=67204 00:13:51.247 13:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:51.247 13:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:51.247 Running I/O for 10 seconds... 00:13:52.622 Latency(us) 00:13:52.622 [2024-12-06T13:07:59.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.623 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:52.623 Nvme0n1 : 1.00 7770.00 30.35 0.00 0.00 0.00 0.00 0.00 00:13:52.623 [2024-12-06T13:07:59.151Z] =================================================================================================================== 00:13:52.623 [2024-12-06T13:07:59.151Z] Total : 7770.00 30.35 0.00 0.00 0.00 0.00 0.00 00:13:52.623 00:13:53.218 13:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f06ffda8-f395-47d1-84a2-482a5ea8cdc3 00:13:53.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.475 Nvme0n1 : 2.00 7680.50 30.00 0.00 0.00 0.00 0.00 0.00 00:13:53.475 [2024-12-06T13:08:00.003Z] =================================================================================================================== 00:13:53.475 [2024-12-06T13:08:00.003Z] Total : 7680.50 30.00 0.00 0.00 0.00 0.00 0.00 00:13:53.475 00:13:53.475 true 00:13:53.732 13:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f06ffda8-f395-47d1-84a2-482a5ea8cdc3 00:13:53.732 13:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:53.990 13:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:53.990 13:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:53.990 13:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 67204 00:13:54.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:54.247 Nvme0n1 : 3.00 7553.33 29.51 0.00 0.00 0.00 0.00 0.00 00:13:54.247 [2024-12-06T13:08:00.775Z] =================================================================================================================== 00:13:54.247 [2024-12-06T13:08:00.775Z] Total : 7553.33 29.51 0.00 0.00 0.00 0.00 0.00 00:13:54.247 00:13:55.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:55.618 Nvme0n1 : 4.00 7494.50 29.28 0.00 0.00 0.00 0.00 0.00 00:13:55.618 [2024-12-06T13:08:02.146Z] =================================================================================================================== 00:13:55.618 [2024-12-06T13:08:02.146Z] Total : 7494.50 29.28 0.00 0.00 0.00 
0.00 0.00 00:13:55.618 00:13:56.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:56.550 Nvme0n1 : 5.00 7427.00 29.01 0.00 0.00 0.00 0.00 0.00 00:13:56.550 [2024-12-06T13:08:03.078Z] =================================================================================================================== 00:13:56.550 [2024-12-06T13:08:03.078Z] Total : 7427.00 29.01 0.00 0.00 0.00 0.00 0.00 00:13:56.550 00:13:57.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:57.485 Nvme0n1 : 6.00 7056.83 27.57 0.00 0.00 0.00 0.00 0.00 00:13:57.485 [2024-12-06T13:08:04.013Z] =================================================================================================================== 00:13:57.485 [2024-12-06T13:08:04.013Z] Total : 7056.83 27.57 0.00 0.00 0.00 0.00 0.00 00:13:57.485 00:13:58.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:58.433 Nvme0n1 : 7.00 7043.86 27.52 0.00 0.00 0.00 0.00 0.00 00:13:58.433 [2024-12-06T13:08:04.961Z] =================================================================================================================== 00:13:58.433 [2024-12-06T13:08:04.961Z] Total : 7043.86 27.52 0.00 0.00 0.00 0.00 0.00 00:13:58.433 00:13:59.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:59.385 Nvme0n1 : 8.00 7067.38 27.61 0.00 0.00 0.00 0.00 0.00 00:13:59.385 [2024-12-06T13:08:05.913Z] =================================================================================================================== 00:13:59.385 [2024-12-06T13:08:05.913Z] Total : 7067.38 27.61 0.00 0.00 0.00 0.00 0.00 00:13:59.385 00:14:00.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:00.321 Nvme0n1 : 9.00 7049.33 27.54 0.00 0.00 0.00 0.00 0.00 00:14:00.321 [2024-12-06T13:08:06.849Z] =================================================================================================================== 00:14:00.321 [2024-12-06T13:08:06.849Z] Total : 7049.33 27.54 0.00 0.00 0.00 0.00 0.00 00:14:00.321 00:14:01.257 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:01.257 Nvme0n1 : 10.00 7068.30 27.61 0.00 0.00 0.00 0.00 0.00 00:14:01.257 [2024-12-06T13:08:07.785Z] =================================================================================================================== 00:14:01.257 [2024-12-06T13:08:07.785Z] Total : 7068.30 27.61 0.00 0.00 0.00 0.00 0.00 00:14:01.257 00:14:01.257 00:14:01.257 Latency(us) 00:14:01.257 [2024-12-06T13:08:07.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.257 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:01.257 Nvme0n1 : 10.01 7068.17 27.61 0.00 0.00 18096.79 3351.27 274536.26 00:14:01.257 [2024-12-06T13:08:07.785Z] =================================================================================================================== 00:14:01.257 [2024-12-06T13:08:07.785Z] Total : 7068.17 27.61 0.00 0.00 18096.79 3351.27 274536.26 00:14:01.257 { 00:14:01.257 "results": [ 00:14:01.257 { 00:14:01.257 "job": "Nvme0n1", 00:14:01.257 "core_mask": "0x2", 00:14:01.257 "workload": "randwrite", 00:14:01.257 "status": "finished", 00:14:01.257 "queue_depth": 128, 00:14:01.257 "io_size": 4096, 00:14:01.257 "runtime": 10.009527, 00:14:01.257 "iops": 7068.166158101177, 00:14:01.257 "mibps": 27.61002405508272, 00:14:01.257 "io_failed": 0, 00:14:01.257 "io_timeout": 0, 00:14:01.257 "avg_latency_us": 
18096.785569265998, 00:14:01.257 "min_latency_us": 3351.2727272727275, 00:14:01.257 "max_latency_us": 274536.26181818184 00:14:01.257 } 00:14:01.257 ], 00:14:01.257 "core_count": 1 00:14:01.257 } 00:14:01.257 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 67169 00:14:01.257 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 67169 ']' 00:14:01.257 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 67169 00:14:01.257 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:14:01.257 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.516 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67169 00:14:01.516 killing process with pid 67169 00:14:01.516 Received shutdown signal, test time was about 10.000000 seconds 00:14:01.516 00:14:01.516 Latency(us) 00:14:01.516 [2024-12-06T13:08:08.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.516 [2024-12-06T13:08:08.044Z] =================================================================================================================== 00:14:01.516 [2024-12-06T13:08:08.044Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:01.516 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:01.516 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:01.516 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67169' 00:14:01.516 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 67169 00:14:01.516 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 67169 00:14:01.516 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:01.775 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:02.034 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f06ffda8-f395-47d1-84a2-482a5ea8cdc3 00:14:02.034 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:02.601 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:02.601 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:02.601 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 66565 00:14:02.601 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 66565 00:14:02.601 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 66565 Killed "${NVMF_APP[@]}" "$@" 00:14:02.601 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:02.601 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:02.601 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:02.601 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.601 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:02.601 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=67377 00:14:02.601 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:02.601 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 67377 00:14:02.601 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 67377 ']' 00:14:02.601 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.601 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.601 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.601 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.601 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:02.601 [2024-12-06 13:08:08.984891] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:14:02.601 [2024-12-06 13:08:08.985305] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.881 [2024-12-06 13:08:09.141582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.881 [2024-12-06 13:08:09.175185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.881 [2024-12-06 13:08:09.175244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.881 [2024-12-06 13:08:09.175257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.881 [2024-12-06 13:08:09.175265] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.881 [2024-12-06 13:08:09.175272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
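Here is the "dirty" twist: the original nvmf target (pid 66565) is killed with SIGKILL while the grown lvstore is still open, so the blobstore is never cleanly unloaded. A fresh nvmf_tgt (pid 67377) is started, and re-creating the AIO bdev forces blobstore recovery on load, recorded just below as "Performing recovery on blobstore" with blobs 0x0 and 0x1 recovered. A sketch of the idiom, assuming NVMF_APP holds the target command line as in this harness:

  kill -9 "$nvmfpid"            # leave the lvstore dirty on purpose
  "${NVMF_APP[@]}" -m 0x1 &     # restart the target
  nvmfpid=$!
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096   # reload triggers recovery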
00:14:02.881 [2024-12-06 13:08:09.175576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.881 13:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.881 13:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:14:02.881 13:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:02.881 13:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:02.881 13:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:02.881 13:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.881 13:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:03.140 [2024-12-06 13:08:09.664952] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:03.140 [2024-12-06 13:08:09.665261] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:03.398 [2024-12-06 13:08:09.665472] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:03.398 13:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:03.398 13:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 23a475e8-e280-4e0e-8b53-3c5bea654493 00:14:03.398 13:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=23a475e8-e280-4e0e-8b53-3c5bea654493 00:14:03.398 13:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:03.398 13:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:14:03.398 13:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:03.398 13:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:03.398 13:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:03.657 13:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 23a475e8-e280-4e0e-8b53-3c5bea654493 -t 2000 00:14:03.916 [ 00:14:03.916 { 00:14:03.916 "aliases": [ 00:14:03.916 "lvs/lvol" 00:14:03.916 ], 00:14:03.916 "assigned_rate_limits": { 00:14:03.916 "r_mbytes_per_sec": 0, 00:14:03.916 "rw_ios_per_sec": 0, 00:14:03.916 "rw_mbytes_per_sec": 0, 00:14:03.916 "w_mbytes_per_sec": 0 00:14:03.916 }, 00:14:03.916 "block_size": 4096, 00:14:03.916 "claimed": false, 00:14:03.916 "driver_specific": { 00:14:03.916 "lvol": { 00:14:03.916 "base_bdev": "aio_bdev", 00:14:03.916 "clone": false, 00:14:03.916 "esnap_clone": false, 00:14:03.916 "lvol_store_uuid": "f06ffda8-f395-47d1-84a2-482a5ea8cdc3", 00:14:03.916 "num_allocated_clusters": 38, 00:14:03.916 "snapshot": false, 00:14:03.916 
"thin_provision": false 00:14:03.916 } 00:14:03.916 }, 00:14:03.916 "name": "23a475e8-e280-4e0e-8b53-3c5bea654493", 00:14:03.916 "num_blocks": 38912, 00:14:03.916 "product_name": "Logical Volume", 00:14:03.916 "supported_io_types": { 00:14:03.916 "abort": false, 00:14:03.916 "compare": false, 00:14:03.916 "compare_and_write": false, 00:14:03.916 "copy": false, 00:14:03.916 "flush": false, 00:14:03.916 "get_zone_info": false, 00:14:03.916 "nvme_admin": false, 00:14:03.916 "nvme_io": false, 00:14:03.916 "nvme_io_md": false, 00:14:03.916 "nvme_iov_md": false, 00:14:03.916 "read": true, 00:14:03.916 "reset": true, 00:14:03.916 "seek_data": true, 00:14:03.916 "seek_hole": true, 00:14:03.916 "unmap": true, 00:14:03.917 "write": true, 00:14:03.917 "write_zeroes": true, 00:14:03.917 "zcopy": false, 00:14:03.917 "zone_append": false, 00:14:03.917 "zone_management": false 00:14:03.917 }, 00:14:03.917 "uuid": "23a475e8-e280-4e0e-8b53-3c5bea654493", 00:14:03.917 "zoned": false 00:14:03.917 } 00:14:03.917 ] 00:14:03.917 13:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:14:03.917 13:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f06ffda8-f395-47d1-84a2-482a5ea8cdc3 00:14:03.917 13:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:04.175 13:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:04.175 13:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:04.175 13:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f06ffda8-f395-47d1-84a2-482a5ea8cdc3 00:14:04.741 13:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:04.741 13:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:04.741 [2024-12-06 13:08:11.234462] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:04.999 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f06ffda8-f395-47d1-84a2-482a5ea8cdc3 00:14:05.000 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:14:05.000 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f06ffda8-f395-47d1-84a2-482a5ea8cdc3 00:14:05.000 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:05.000 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:05.000 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:05.000 13:08:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:05.000 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:05.000 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:05.000 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:05.000 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:05.000 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f06ffda8-f395-47d1-84a2-482a5ea8cdc3 00:14:05.258 2024/12/06 13:08:11 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:f06ffda8-f395-47d1-84a2-482a5ea8cdc3], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:05.258 request: 00:14:05.258 { 00:14:05.258 "method": "bdev_lvol_get_lvstores", 00:14:05.258 "params": { 00:14:05.258 "uuid": "f06ffda8-f395-47d1-84a2-482a5ea8cdc3" 00:14:05.258 } 00:14:05.258 } 00:14:05.258 Got JSON-RPC error response 00:14:05.258 GoRPCClient: error on JSON-RPC call 00:14:05.258 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:14:05.258 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:05.258 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:05.258 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:05.258 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:05.516 aio_bdev 00:14:05.516 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 23a475e8-e280-4e0e-8b53-3c5bea654493 00:14:05.516 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=23a475e8-e280-4e0e-8b53-3c5bea654493 00:14:05.516 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:05.516 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:14:05.516 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:05.516 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:05.516 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:05.774 13:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 23a475e8-e280-4e0e-8b53-3c5bea654493 -t 2000 00:14:06.035 [ 
00:14:06.035 { 00:14:06.035 "aliases": [ 00:14:06.035 "lvs/lvol" 00:14:06.035 ], 00:14:06.035 "assigned_rate_limits": { 00:14:06.035 "r_mbytes_per_sec": 0, 00:14:06.035 "rw_ios_per_sec": 0, 00:14:06.035 "rw_mbytes_per_sec": 0, 00:14:06.035 "w_mbytes_per_sec": 0 00:14:06.035 }, 00:14:06.035 "block_size": 4096, 00:14:06.035 "claimed": false, 00:14:06.035 "driver_specific": { 00:14:06.035 "lvol": { 00:14:06.035 "base_bdev": "aio_bdev", 00:14:06.035 "clone": false, 00:14:06.035 "esnap_clone": false, 00:14:06.035 "lvol_store_uuid": "f06ffda8-f395-47d1-84a2-482a5ea8cdc3", 00:14:06.035 "num_allocated_clusters": 38, 00:14:06.035 "snapshot": false, 00:14:06.035 "thin_provision": false 00:14:06.035 } 00:14:06.035 }, 00:14:06.035 "name": "23a475e8-e280-4e0e-8b53-3c5bea654493", 00:14:06.035 "num_blocks": 38912, 00:14:06.035 "product_name": "Logical Volume", 00:14:06.035 "supported_io_types": { 00:14:06.035 "abort": false, 00:14:06.035 "compare": false, 00:14:06.035 "compare_and_write": false, 00:14:06.035 "copy": false, 00:14:06.035 "flush": false, 00:14:06.035 "get_zone_info": false, 00:14:06.035 "nvme_admin": false, 00:14:06.035 "nvme_io": false, 00:14:06.035 "nvme_io_md": false, 00:14:06.035 "nvme_iov_md": false, 00:14:06.035 "read": true, 00:14:06.035 "reset": true, 00:14:06.035 "seek_data": true, 00:14:06.035 "seek_hole": true, 00:14:06.035 "unmap": true, 00:14:06.035 "write": true, 00:14:06.035 "write_zeroes": true, 00:14:06.035 "zcopy": false, 00:14:06.035 "zone_append": false, 00:14:06.035 "zone_management": false 00:14:06.035 }, 00:14:06.035 "uuid": "23a475e8-e280-4e0e-8b53-3c5bea654493", 00:14:06.035 "zoned": false 00:14:06.035 } 00:14:06.035 ] 00:14:06.035 13:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:14:06.035 13:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f06ffda8-f395-47d1-84a2-482a5ea8cdc3 00:14:06.035 13:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:06.293 13:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:06.293 13:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f06ffda8-f395-47d1-84a2-482a5ea8cdc3 00:14:06.294 13:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:06.551 13:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:06.551 13:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 23a475e8-e280-4e0e-8b53-3c5bea654493 00:14:06.809 13:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f06ffda8-f395-47d1-84a2-482a5ea8cdc3 00:14:07.375 13:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:07.375 13:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:07.941 ************************************ 00:14:07.941 END TEST lvs_grow_dirty 00:14:07.941 ************************************ 00:14:07.941 00:14:07.941 real 0m21.094s 00:14:07.941 user 0m44.237s 00:14:07.941 sys 0m7.809s 00:14:07.941 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.941 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:07.941 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:07.941 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:14:07.941 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:14:07.941 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:07.941 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:07.941 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:07.941 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:07.941 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:07.941 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:07.941 nvmf_trace.0 00:14:07.941 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:14:07.941 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:07.941 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:07.941 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:14:08.198 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:08.198 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:14:08.198 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:08.198 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:08.198 rmmod nvme_tcp 00:14:08.455 rmmod nvme_fabrics 00:14:08.455 rmmod nvme_keyring 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 67377 ']' 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 67377 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 67377 ']' 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 67377 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:14:08.455 13:08:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67377 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:08.455 killing process with pid 67377 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67377' 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 67377 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 67377 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:08.455 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:08.713 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:08.713 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:08.713 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:08.713 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:08.713 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:08.713 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:08.713 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:08.713 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:08.713 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:08.713 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:08.713 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:08.713 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:14:08.713 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.713 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.713 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.713 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:14:08.713 ************************************ 00:14:08.713 END TEST nvmf_lvs_grow 00:14:08.713 ************************************ 00:14:08.713 00:14:08.713 real 0m42.427s 00:14:08.713 user 1m9.223s 00:14:08.713 sys 0m10.958s 00:14:08.713 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.713 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:08.971 ************************************ 00:14:08.971 START TEST nvmf_bdev_io_wait 00:14:08.971 ************************************ 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:08.971 * Looking for test storage... 
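Looking back briefly at the lvs_grow_dirty teardown above: the trace deletes the AIO base bdev and then asserts that looking up the lvstore now fails, via autotest_common.sh's NOT wrapper (the es=1 / (( !es == 0 )) bookkeeping visible earlier inverts the exit status, so the step passes exactly when the RPC errors out with Code=-19, "No such device"). A plain-shell sketch of that expected-failure pattern, with the command and UUID taken from the trace (the rpc variable is shorthand introduced here, not part of the recorded run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_aio_delete aio_bdev
    # Equivalent of: NOT $rpc bdev_lvol_get_lvstores -u <uuid>
    if $rpc bdev_lvol_get_lvstores -u f06ffda8-f395-47d1-84a2-482a5ea8cdc3; then
        echo "lvstore still visible after its base bdev was removed" >&2
        exit 1
    fi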
00:14:08.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:08.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.971 --rc genhtml_branch_coverage=1 00:14:08.971 --rc genhtml_function_coverage=1 00:14:08.971 --rc genhtml_legend=1 00:14:08.971 --rc geninfo_all_blocks=1 00:14:08.971 --rc geninfo_unexecuted_blocks=1 00:14:08.971 00:14:08.971 ' 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:08.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.971 --rc genhtml_branch_coverage=1 00:14:08.971 --rc genhtml_function_coverage=1 00:14:08.971 --rc genhtml_legend=1 00:14:08.971 --rc geninfo_all_blocks=1 00:14:08.971 --rc geninfo_unexecuted_blocks=1 00:14:08.971 00:14:08.971 ' 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:08.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.971 --rc genhtml_branch_coverage=1 00:14:08.971 --rc genhtml_function_coverage=1 00:14:08.971 --rc genhtml_legend=1 00:14:08.971 --rc geninfo_all_blocks=1 00:14:08.971 --rc geninfo_unexecuted_blocks=1 00:14:08.971 00:14:08.971 ' 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:08.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.971 --rc genhtml_branch_coverage=1 00:14:08.971 --rc genhtml_function_coverage=1 00:14:08.971 --rc genhtml_legend=1 00:14:08.971 --rc geninfo_all_blocks=1 00:14:08.971 --rc geninfo_unexecuted_blocks=1 00:14:08.971 00:14:08.971 ' 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.971 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:08.972 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
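For orientation: the two sizes just set feed the RPC sequence that bdev_io_wait.sh drives once the veth/netns plumbing below is up. A condensed sketch of that target-side setup, with every command and value taken verbatim from the xtrace that follows (the rpc variable is shorthand introduced here; rpc_cmd in the trace wraps the same rpc.py):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # nvmf_tgt is launched with --wait-for-rpc, so bdev options can be set before
    # framework init: a 5-entry bdev_io pool with a cache of 1 makes submissions
    # likely to hit ENOMEM and exercise the queued-I/O (bdev_io_wait) path under test.
    $rpc bdev_set_options -p 5 -c 1
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0    # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE above
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420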
00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:08.972 
13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:08.972 Cannot find device "nvmf_init_br" 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:14:08.972 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:09.229 Cannot find device "nvmf_init_br2" 00:14:09.229 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:14:09.229 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:09.229 Cannot find device "nvmf_tgt_br" 00:14:09.229 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:14:09.229 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:09.230 Cannot find device "nvmf_tgt_br2" 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:09.230 Cannot find device "nvmf_init_br" 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:09.230 Cannot find device "nvmf_init_br2" 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:09.230 Cannot find device "nvmf_tgt_br" 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:09.230 Cannot find device "nvmf_tgt_br2" 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:09.230 Cannot find device "nvmf_br" 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:09.230 Cannot find device "nvmf_init_if" 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:09.230 Cannot find device "nvmf_init_if2" 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:09.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:14:09.230 
13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:09.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:09.230 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:09.487 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:09.487 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:09.487 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:09.487 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:09.487 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:09.487 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:09.487 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:09.487 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:09.487 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:09.487 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:09.487 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:09.488 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:09.488 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:14:09.488 00:14:09.488 --- 10.0.0.3 ping statistics --- 00:14:09.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.488 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:09.488 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:09.488 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:14:09.488 00:14:09.488 --- 10.0.0.4 ping statistics --- 00:14:09.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.488 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:09.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:09.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:14:09.488 00:14:09.488 --- 10.0.0.1 ping statistics --- 00:14:09.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.488 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:09.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:09.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:14:09.488 00:14:09.488 --- 10.0.0.2 ping statistics --- 00:14:09.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.488 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=67833 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 67833 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 67833 ']' 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.488 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:09.488 [2024-12-06 13:08:15.948628] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:14:09.488 [2024-12-06 13:08:15.948707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.745 [2024-12-06 13:08:16.095670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:09.745 [2024-12-06 13:08:16.128533] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.745 [2024-12-06 13:08:16.128603] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.745 [2024-12-06 13:08:16.128630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.745 [2024-12-06 13:08:16.128639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.745 [2024-12-06 13:08:16.128646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.745 [2024-12-06 13:08:16.129491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.745 [2024-12-06 13:08:16.129545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.745 [2024-12-06 13:08:16.129631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.745 [2024-12-06 13:08:16.129635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.745 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.745 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:14:09.745 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:09.745 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:09.745 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:14:10.010 [2024-12-06 13:08:16.339275] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:10.010 Malloc0 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:10.010 [2024-12-06 13:08:16.389708] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67878 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67880 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:10.010 13:08:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:10.010 { 00:14:10.010 "params": { 00:14:10.010 "name": "Nvme$subsystem", 00:14:10.010 "trtype": "$TEST_TRANSPORT", 00:14:10.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:10.010 "adrfam": "ipv4", 00:14:10.010 "trsvcid": "$NVMF_PORT", 00:14:10.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:10.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:10.010 "hdgst": ${hdgst:-false}, 00:14:10.010 "ddgst": ${ddgst:-false} 00:14:10.010 }, 00:14:10.010 "method": "bdev_nvme_attach_controller" 00:14:10.010 } 00:14:10.010 EOF 00:14:10.010 )") 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67882 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:10.010 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:10.010 { 00:14:10.011 "params": { 00:14:10.011 "name": "Nvme$subsystem", 00:14:10.011 "trtype": "$TEST_TRANSPORT", 00:14:10.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:10.011 "adrfam": "ipv4", 00:14:10.011 "trsvcid": "$NVMF_PORT", 00:14:10.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:10.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:10.011 "hdgst": ${hdgst:-false}, 00:14:10.011 "ddgst": ${ddgst:-false} 00:14:10.011 }, 00:14:10.011 "method": "bdev_nvme_attach_controller" 00:14:10.011 } 00:14:10.011 EOF 00:14:10.011 )") 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67885 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:10.011 { 00:14:10.011 "params": { 00:14:10.011 "name": "Nvme$subsystem", 00:14:10.011 "trtype": 
"$TEST_TRANSPORT", 00:14:10.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:10.011 "adrfam": "ipv4", 00:14:10.011 "trsvcid": "$NVMF_PORT", 00:14:10.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:10.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:10.011 "hdgst": ${hdgst:-false}, 00:14:10.011 "ddgst": ${ddgst:-false} 00:14:10.011 }, 00:14:10.011 "method": "bdev_nvme_attach_controller" 00:14:10.011 } 00:14:10.011 EOF 00:14:10.011 )") 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:10.011 "params": { 00:14:10.011 "name": "Nvme1", 00:14:10.011 "trtype": "tcp", 00:14:10.011 "traddr": "10.0.0.3", 00:14:10.011 "adrfam": "ipv4", 00:14:10.011 "trsvcid": "4420", 00:14:10.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:10.011 "hdgst": false, 00:14:10.011 "ddgst": false 00:14:10.011 }, 00:14:10.011 "method": "bdev_nvme_attach_controller" 00:14:10.011 }' 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:10.011 "params": { 00:14:10.011 "name": "Nvme1", 00:14:10.011 "trtype": "tcp", 00:14:10.011 "traddr": "10.0.0.3", 00:14:10.011 "adrfam": "ipv4", 00:14:10.011 "trsvcid": "4420", 00:14:10.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:10.011 "hdgst": false, 00:14:10.011 "ddgst": false 00:14:10.011 }, 00:14:10.011 "method": "bdev_nvme_attach_controller" 00:14:10.011 }' 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:10.011 "params": { 00:14:10.011 "name": "Nvme1", 00:14:10.011 "trtype": "tcp", 00:14:10.011 "traddr": "10.0.0.3", 00:14:10.011 "adrfam": "ipv4", 00:14:10.011 "trsvcid": "4420", 00:14:10.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:10.011 "hdgst": false, 00:14:10.011 "ddgst": false 00:14:10.011 }, 00:14:10.011 "method": "bdev_nvme_attach_controller" 00:14:10.011 }' 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:10.011 { 00:14:10.011 "params": { 00:14:10.011 "name": "Nvme$subsystem", 00:14:10.011 "trtype": "$TEST_TRANSPORT", 00:14:10.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:10.011 "adrfam": "ipv4", 00:14:10.011 "trsvcid": "$NVMF_PORT", 00:14:10.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:10.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:10.011 "hdgst": ${hdgst:-false}, 00:14:10.011 "ddgst": ${ddgst:-false} 00:14:10.011 }, 00:14:10.011 "method": "bdev_nvme_attach_controller" 00:14:10.011 } 00:14:10.011 EOF 00:14:10.011 )") 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:10.011 "params": { 00:14:10.011 "name": "Nvme1", 00:14:10.011 "trtype": "tcp", 00:14:10.011 "traddr": "10.0.0.3", 00:14:10.011 "adrfam": "ipv4", 00:14:10.011 "trsvcid": "4420", 00:14:10.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:10.011 "hdgst": false, 00:14:10.011 "ddgst": false 00:14:10.011 }, 00:14:10.011 "method": "bdev_nvme_attach_controller" 00:14:10.011 }' 00:14:10.011 [2024-12-06 13:08:16.461132] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:14:10.011 [2024-12-06 13:08:16.461231] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:10.011 [2024-12-06 13:08:16.461515] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:14:10.011 [2024-12-06 13:08:16.461584] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:10.011 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67878 00:14:10.011 [2024-12-06 13:08:16.479296] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:14:10.011 [2024-12-06 13:08:16.479392] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:10.011 [2024-12-06 13:08:16.497551] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:14:10.011 [2024-12-06 13:08:16.497652] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:10.278 [2024-12-06 13:08:16.657632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.278 [2024-12-06 13:08:16.688973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:10.278 [2024-12-06 13:08:16.693510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.278 [2024-12-06 13:08:16.725980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:10.278 [2024-12-06 13:08:16.738538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.278 [2024-12-06 13:08:16.771689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:10.278 [2024-12-06 13:08:16.787897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.536 Running I/O for 1 seconds... 00:14:10.536 [2024-12-06 13:08:16.819045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:14:10.536 Running I/O for 1 seconds... 00:14:10.536 Running I/O for 1 seconds... 00:14:10.536 Running I/O for 1 seconds... 
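At this point four bdevperf instances are running concurrently against cnode1, one per I/O type, each with its own core mask and DPDK file prefix (spdk1..spdk4) so their shared-memory state cannot collide; bdev_io_wait.sh then reaps them via `wait` on pids 67878/67880/67882/67885. A rough sketch of that launch-and-reap shape, with masks, queue depth, I/O size and duration read off the job banners below; the exact bdevperf invocation in the test script may differ:

# Launch one background bdevperf per workload, remember the pids, reap all.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
masks=(0x10 0x20 0x40 0x80)
workloads=(write read flush unmap)
pids=()
for n in 0 1 2 3; do
  "$BDEVPERF" -m "${masks[$n]}" --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w "${workloads[$n]}" -t 1 &
  pids+=($!)
done
wait "${pids[@]}"   # blocks until every instance finishes its 1 s run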
00:14:11.467 6107.00 IOPS, 23.86 MiB/s
00:14:11.467 Latency(us)
00:14:11.467 [2024-12-06T13:08:17.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:11.468 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:14:11.468 Nvme1n1 : 1.02 6123.37 23.92 0.00 0.00 20614.73 9234.62 32887.16
00:14:11.468 [2024-12-06T13:08:17.996Z] ===================================================================================================================
00:14:11.468 [2024-12-06T13:08:17.996Z] Total : 6123.37 23.92 0.00 0.00 20614.73 9234.62 32887.16
00:14:11.468 178648.00 IOPS, 697.84 MiB/s
00:14:11.468 Latency(us)
00:14:11.468 [2024-12-06T13:08:17.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:11.468 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:14:11.468 Nvme1n1 : 1.00 178239.63 696.25 0.00 0.00 713.90 318.37 2249.08
00:14:11.468 [2024-12-06T13:08:17.996Z] ===================================================================================================================
00:14:11.468 [2024-12-06T13:08:17.996Z] Total : 178239.63 696.25 0.00 0.00 713.90 318.37 2249.08
00:14:11.468 8415.00 IOPS, 32.87 MiB/s
00:14:11.468 Latency(us)
00:14:11.468 [2024-12-06T13:08:17.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:11.468 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:14:11.468 Nvme1n1 : 1.01 8472.77 33.10 0.00 0.00 15031.87 7477.06 25261.15
00:14:11.468 [2024-12-06T13:08:17.996Z] ===================================================================================================================
00:14:11.468 [2024-12-06T13:08:17.996Z] Total : 8472.77 33.10 0.00 0.00 15031.87 7477.06 25261.15
00:14:11.468 6267.00 IOPS, 24.48 MiB/s
00:14:11.468 Latency(us)
00:14:11.468 [2024-12-06T13:08:17.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:11.468 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:14:11.468 Nvme1n1 : 1.01 6377.76 24.91 0.00 0.00 20018.76 3321.48 46232.67
00:14:11.468 [2024-12-06T13:08:17.996Z] ===================================================================================================================
00:14:11.468 [2024-12-06T13:08:17.996Z] Total : 6377.76 24.91 0.00 0.00 20018.76 3321.48 46232.67
00:14:11.726 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67880
00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 67882
00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67885
00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- #
nvmfcleanup 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:11.727 rmmod nvme_tcp 00:14:11.727 rmmod nvme_fabrics 00:14:11.727 rmmod nvme_keyring 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 67833 ']' 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 67833 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 67833 ']' 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 67833 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67833 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:11.727 killing process with pid 67833 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67833' 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 67833 00:14:11.727 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 67833 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:11.986 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:12.245 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:12.245 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:12.245 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.245 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.245 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.245 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:14:12.245 ************************************ 00:14:12.245 END TEST nvmf_bdev_io_wait 00:14:12.245 ************************************ 00:14:12.245 00:14:12.245 real 0m3.324s 00:14:12.245 user 0m13.313s 00:14:12.245 sys 0m1.794s 00:14:12.245 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.245 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:12.245 13:08:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:12.245 13:08:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:12.245 13:08:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.245 13:08:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:12.245 ************************************ 00:14:12.245 START TEST nvmf_queue_depth 00:14:12.245 ************************************ 00:14:12.245 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:12.245 * Looking for test storage... 
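Worth noting in the teardown just traced for nvmf_bdev_io_wait: killprocess will not signal a pid whose comm name is not the expected SPDK reactor, which keeps a stale pid from taking down an unrelated process. A condensed sketch of that guard, matching the kill -0 / ps / kill / wait sequence above (an approximation of the autotest_common.sh helper, not its verbatim body):

# Guarded process kill in the style of autotest_common.sh's killprocess.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0            # already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 above
    [ "$process_name" = sudo ] && return 1            # never signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap, ignore exit status
}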
00:14:12.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:12.245 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:12.245 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:14:12.245 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:12.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.505 --rc genhtml_branch_coverage=1 00:14:12.505 --rc genhtml_function_coverage=1 00:14:12.505 --rc genhtml_legend=1 00:14:12.505 --rc geninfo_all_blocks=1 00:14:12.505 --rc geninfo_unexecuted_blocks=1 00:14:12.505 00:14:12.505 ' 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:12.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.505 --rc genhtml_branch_coverage=1 00:14:12.505 --rc genhtml_function_coverage=1 00:14:12.505 --rc genhtml_legend=1 00:14:12.505 --rc geninfo_all_blocks=1 00:14:12.505 --rc geninfo_unexecuted_blocks=1 00:14:12.505 00:14:12.505 ' 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:12.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.505 --rc genhtml_branch_coverage=1 00:14:12.505 --rc genhtml_function_coverage=1 00:14:12.505 --rc genhtml_legend=1 00:14:12.505 --rc geninfo_all_blocks=1 00:14:12.505 --rc geninfo_unexecuted_blocks=1 00:14:12.505 00:14:12.505 ' 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:12.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.505 --rc genhtml_branch_coverage=1 00:14:12.505 --rc genhtml_function_coverage=1 00:14:12.505 --rc genhtml_legend=1 00:14:12.505 --rc geninfo_all_blocks=1 00:14:12.505 --rc geninfo_unexecuted_blocks=1 00:14:12.505 00:14:12.505 ' 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.505 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:12.506 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:12.506 
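Sourcing test/nvmf/common.sh above also minted a fresh host identity: nvme-cli's gen-hostnqn produces a random uuid-based NQN, and the host ID is simply that uuid. A short sketch of the same setup (requires nvme-cli; the uuid differs every run, 78bfff7e-... in this log):

# Host identity as generated by nvmf/common.sh@17-19 above.
NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip everything up to the last ':'
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")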
13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:12.506 13:08:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:12.506 Cannot find device "nvmf_init_br" 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:12.506 Cannot find device "nvmf_init_br2" 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:12.506 Cannot find device "nvmf_tgt_br" 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:12.506 Cannot find device "nvmf_tgt_br2" 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:12.506 Cannot find device "nvmf_init_br" 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:12.506 Cannot find device "nvmf_init_br2" 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:12.506 Cannot find device "nvmf_tgt_br" 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:12.506 Cannot find device "nvmf_tgt_br2" 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:12.506 Cannot find device "nvmf_br" 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:12.506 Cannot find device "nvmf_init_if" 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:12.506 Cannot find device "nvmf_init_if2" 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:12.506 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:12.506 13:08:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:12.506 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:12.506 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:12.506 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:12.506 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:12.506 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:12.765 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:12.765 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:12.765 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:12.765 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:12.765 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:12.765 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:12.765 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:12.765 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:12.765 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:12.765 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:12.765 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:12.765 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:12.765 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:12.765 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:12.765 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:12.765 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:12.766 
13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:12.766 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:12.766 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:14:12.766 00:14:12.766 --- 10.0.0.3 ping statistics --- 00:14:12.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.766 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:12.766 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:12.766 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:14:12.766 00:14:12.766 --- 10.0.0.4 ping statistics --- 00:14:12.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.766 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:12.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:12.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:14:12.766 00:14:12.766 --- 10.0.0.1 ping statistics --- 00:14:12.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.766 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:12.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:12.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:14:12.766 00:14:12.766 --- 10.0.0.2 ping statistics --- 00:14:12.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.766 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:12.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=68138 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 68138 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 68138 ']' 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.766 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:13.026 [2024-12-06 13:08:19.341665] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:14:13.026 [2024-12-06 13:08:19.341774] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.026 [2024-12-06 13:08:19.495459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.026 [2024-12-06 13:08:19.528125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.026 [2024-12-06 13:08:19.528179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.026 [2024-12-06 13:08:19.528191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.026 [2024-12-06 13:08:19.528199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.026 [2024-12-06 13:08:19.528206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:13.026 [2024-12-06 13:08:19.528506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:13.286 [2024-12-06 13:08:19.659535] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:13.286 Malloc0 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
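The rpc_cmd calls traced above, together with the add_ns/add_listener calls that follow, provision the target end to end. rpc_cmd is a thin wrapper; written as direct rpc.py invocations against the default /var/tmp/spdk.sock the sequence is roughly the sketch below (arguments copied from the trace; comments are interpretation):

# Transport, backing bdev, subsystem, namespace, listener - in that order.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192    # flags exactly as passed above
$rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420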
00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:13.286 [2024-12-06 13:08:19.702233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=68174 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 68174 /var/tmp/bdevperf.sock 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 68174 ']' 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:13.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.286 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:13.286 [2024-12-06 13:08:19.760677] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:14:13.286 [2024-12-06 13:08:19.760968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68174 ] 00:14:13.545 [2024-12-06 13:08:19.909799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.545 [2024-12-06 13:08:19.948886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.546 13:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.546 13:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:14:13.546 13:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:13.546 13:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.546 13:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:13.804 NVMe0n1 00:14:13.804 13:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.804 13:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:13.804 Running I/O for 10 seconds... 00:14:15.757 7345.00 IOPS, 28.69 MiB/s [2024-12-06T13:08:23.713Z] 7707.50 IOPS, 30.11 MiB/s [2024-12-06T13:08:24.646Z] 7937.00 IOPS, 31.00 MiB/s [2024-12-06T13:08:25.580Z] 8017.75 IOPS, 31.32 MiB/s [2024-12-06T13:08:26.516Z] 8050.00 IOPS, 31.45 MiB/s [2024-12-06T13:08:27.445Z] 8171.33 IOPS, 31.92 MiB/s [2024-12-06T13:08:28.375Z] 8175.71 IOPS, 31.94 MiB/s [2024-12-06T13:08:29.316Z] 8188.12 IOPS, 31.98 MiB/s [2024-12-06T13:08:30.713Z] 8227.67 IOPS, 32.14 MiB/s [2024-12-06T13:08:30.713Z] 8280.70 IOPS, 32.35 MiB/s 00:14:24.185 Latency(us) 00:14:24.185 [2024-12-06T13:08:30.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.185 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:24.185 Verification LBA range: start 0x0 length 0x4000 00:14:24.185 NVMe0n1 : 10.08 8313.54 32.47 0.00 0.00 122649.13 27763.43 113436.86 00:14:24.185 [2024-12-06T13:08:30.713Z] =================================================================================================================== 00:14:24.185 [2024-12-06T13:08:30.713Z] Total : 8313.54 32.47 0.00 0.00 122649.13 27763.43 113436.86 00:14:24.185 { 00:14:24.185 "results": [ 00:14:24.185 { 00:14:24.185 "job": "NVMe0n1", 00:14:24.185 "core_mask": "0x1", 00:14:24.185 "workload": "verify", 00:14:24.185 "status": "finished", 00:14:24.185 "verify_range": { 00:14:24.185 "start": 0, 00:14:24.185 "length": 16384 00:14:24.185 }, 00:14:24.185 "queue_depth": 1024, 00:14:24.185 "io_size": 4096, 00:14:24.185 "runtime": 10.083676, 00:14:24.185 "iops": 8313.535659019588, 00:14:24.185 "mibps": 32.474748668045265, 00:14:24.185 "io_failed": 0, 00:14:24.185 "io_timeout": 0, 00:14:24.185 "avg_latency_us": 122649.1335682504, 00:14:24.185 "min_latency_us": 27763.432727272728, 00:14:24.185 "max_latency_us": 113436.85818181818 00:14:24.185 } 00:14:24.185 ], 00:14:24.185 "core_count": 1 00:14:24.185 } 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- target/queue_depth.sh@39 -- # killprocess 68174 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 68174 ']' 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 68174 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68174 00:14:24.185 killing process with pid 68174 00:14:24.185 Received shutdown signal, test time was about 10.000000 seconds 00:14:24.185 00:14:24.185 Latency(us) 00:14:24.185 [2024-12-06T13:08:30.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.185 [2024-12-06T13:08:30.713Z] =================================================================================================================== 00:14:24.185 [2024-12-06T13:08:30.713Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68174' 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 68174 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 68174 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:24.185 rmmod nvme_tcp 00:14:24.185 rmmod nvme_fabrics 00:14:24.185 rmmod nvme_keyring 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 68138 ']' 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 68138 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 68138 ']' 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 68138 00:14:24.185 13:08:30 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68138 00:14:24.185 killing process with pid 68138 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68138' 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 68138 00:14:24.185 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 68138 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:24.444 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:24.702 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:24.702 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:24.702 13:08:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:24.702 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:24.702 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.702 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.702 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.702 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:14:24.702 00:14:24.702 real 0m12.444s 00:14:24.702 user 0m21.318s 00:14:24.702 sys 0m1.860s 00:14:24.702 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.702 ************************************ 00:14:24.702 END TEST nvmf_queue_depth 00:14:24.702 ************************************ 00:14:24.702 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:24.702 13:08:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:24.702 13:08:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:24.703 13:08:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.703 13:08:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:24.703 ************************************ 00:14:24.703 START TEST nvmf_target_multipath 00:14:24.703 ************************************ 00:14:24.703 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:24.703 * Looking for test storage... 
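The queue_depth teardown that just completed leans on the killprocess helper from common/autotest_common.sh: check the pid argument, probe the process with kill -0, inspect its comm name so sudo is never signalled, then kill and reap. A minimal sketch of that pattern, an approximation of the traced helper rather than the verbatim SPDK source (the real function also special-cases processes launched through sudo):

killprocess() {
    # Pattern traced above at autotest_common.sh @954-@978.
    local pid=$1 process_name=
    [[ -z $pid ]] && return 1                  # @954: refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 0     # @958: already gone, nothing to do
    if [[ $(uname) == Linux ]]; then           # @959
        process_name=$(ps --no-headers -o comm= "$pid")   # @960
    fi
    [[ $process_name == sudo ]] && return 1    # @964: never signal sudo itself
    echo "killing process with pid $pid"       # @972
    kill "$pid"                                # @973: default SIGTERM
    wait "$pid" 2>/dev/null || true            # @978: reap; tolerate nonzero exit
}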
00:14:24.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:24.703 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:24.703 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:14:24.703 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:24.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.963 --rc genhtml_branch_coverage=1 00:14:24.963 --rc genhtml_function_coverage=1 00:14:24.963 --rc genhtml_legend=1 00:14:24.963 --rc geninfo_all_blocks=1 00:14:24.963 --rc geninfo_unexecuted_blocks=1 00:14:24.963 00:14:24.963 ' 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:24.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.963 --rc genhtml_branch_coverage=1 00:14:24.963 --rc genhtml_function_coverage=1 00:14:24.963 --rc genhtml_legend=1 00:14:24.963 --rc geninfo_all_blocks=1 00:14:24.963 --rc geninfo_unexecuted_blocks=1 00:14:24.963 00:14:24.963 ' 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:24.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.963 --rc genhtml_branch_coverage=1 00:14:24.963 --rc genhtml_function_coverage=1 00:14:24.963 --rc genhtml_legend=1 00:14:24.963 --rc geninfo_all_blocks=1 00:14:24.963 --rc geninfo_unexecuted_blocks=1 00:14:24.963 00:14:24.963 ' 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:24.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.963 --rc genhtml_branch_coverage=1 00:14:24.963 --rc genhtml_function_coverage=1 00:14:24.963 --rc genhtml_legend=1 00:14:24.963 --rc geninfo_all_blocks=1 00:14:24.963 --rc geninfo_unexecuted_blocks=1 00:14:24.963 00:14:24.963 ' 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.963 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.964 
13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:24.964 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:24.964 13:08:31 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:24.964 Cannot find device "nvmf_init_br" 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:24.964 Cannot find device "nvmf_init_br2" 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:24.964 Cannot find device "nvmf_tgt_br" 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:24.964 Cannot find device "nvmf_tgt_br2" 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:24.964 Cannot find device "nvmf_init_br" 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:24.964 Cannot find device "nvmf_init_br2" 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:24.964 Cannot find device "nvmf_tgt_br" 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:24.964 Cannot find device "nvmf_tgt_br2" 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:24.964 Cannot find device "nvmf_br" 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:24.964 Cannot find device "nvmf_init_if" 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:24.964 Cannot find device "nvmf_init_if2" 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:14:24.964 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:25.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:25.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:25.223 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
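Condensed, the nvmf_veth_init sequence above (together with the nvmf_br bridge wiring on the lines just below) builds the whole test fabric out of veth pairs: the initiator ends stay in the root namespace, the target ends move into nvmf_tgt_ns_spdk, and every *_br peer is enslaved to one bridge. A standalone sketch of the same recipe, trimmed to a single initiator/target pair but keeping the trace's interface names and 10.0.0.0/24 addressing:

# One initiator/target pair of the topology traced above; the full
# harness repeats this for nvmf_init_if2/nvmf_tgt_if2 as well.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # isolate the target end
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up   # done at @207-@208 below
ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br

With both peer ends on the bridge, traffic from 10.0.0.1 in the root namespace reaches 10.0.0.3 inside the namespace, which is exactly what the ping checks below verify.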
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:14:25.224 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:14:25.224 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms
00:14:25.224
00:14:25.224 --- 10.0.0.3 ping statistics ---
00:14:25.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:25.224 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:14:25.224 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:14:25.224 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms
00:14:25.224
00:14:25.224 --- 10.0.0.4 ping statistics ---
00:14:25.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:25.224 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:14:25.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:25.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms
00:14:25.224
00:14:25.224 --- 10.0.0.1 ping statistics ---
00:14:25.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:25.224 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:14:25.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:25.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms
00:14:25.224
00:14:25.224 --- 10.0.0.2 ping statistics ---
00:14:25.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:25.224 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:14:25.224 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:14:25.483 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']'
00:14:25.483 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']'
00:14:25.483 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF
00:14:25.483 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:14:25.483 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:25.483 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:14:25.483 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=68550
00:14:25.483 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:14:25.483 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 68550
00:14:25.483 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 68550 ']'
00:14:25.483 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:25.483 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
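At this point nvmfappstart has launched nvmf_tgt (pid 68550) inside the namespace and blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A hedged sketch of that wait loop: the rpc_get_methods probe is an assumption about the mechanism (the helper's internals are not shown in this trace), and the retry bound mirrors the max_retries=100 local seen above:

waitforlisten_sketch() {
    # Poll an SPDK app until its RPC server answers on the UNIX socket.
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # the target died before listening
        # rpc_get_methods is a cheap built-in RPC; success means the socket is live
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1                                      # never came up within the budget
}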
00:14:25.483 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.483 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.483 13:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:25.483 [2024-12-06 13:08:31.831611] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:14:25.483 [2024-12-06 13:08:31.831743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.483 [2024-12-06 13:08:31.987857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:25.742 [2024-12-06 13:08:32.029740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.742 [2024-12-06 13:08:32.029807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.742 [2024-12-06 13:08:32.029821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.742 [2024-12-06 13:08:32.029832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.742 [2024-12-06 13:08:32.029844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.742 [2024-12-06 13:08:32.030809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.742 [2024-12-06 13:08:32.030916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.742 [2024-12-06 13:08:32.030981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:25.742 [2024-12-06 13:08:32.030985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.742 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.742 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:14:25.742 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:25.742 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:25.742 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:25.742 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.742 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:26.001 [2024-12-06 13:08:32.459445] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.001 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:26.259 Malloc0 00:14:26.259 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 
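Stripped of the xtrace noise, bringing up the multipath target is five short RPCs: a TCP transport, a RAM-backed bdev, and an ANA-reporting subsystem that, on the lines just below, receives the bdev as a namespace plus one listener per path. Replayed back to back:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB IO unit size
$rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME -r                   # -a: allow any host, -r: ANA reporting
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420

The two listeners are what turn this into a multipath test: the same namespace becomes reachable over 10.0.0.3 and 10.0.0.4, and the ANA state of each listener can later be flipped independently with nvmf_subsystem_listener_set_ana_state.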
00:14:26.825 13:08:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:27.084 13:08:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:27.342 [2024-12-06 13:08:33.616185] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:27.342 13:08:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:14:27.600 [2024-12-06 13:08:33.880396] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:14:27.600 13:08:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:14:27.601 13:08:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:14:27.859 13:08:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:14:27.859 13:08:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:14:27.859 13:08:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:27.859 13:08:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:27.859 13:08:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:14:29.813 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ 
nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=68674 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:14:30.071 13:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:14:30.071 [global] 00:14:30.071 thread=1 00:14:30.071 invalidate=1 00:14:30.071 rw=randrw 00:14:30.071 time_based=1 00:14:30.071 runtime=6 00:14:30.071 ioengine=libaio 00:14:30.071 direct=1 00:14:30.071 bs=4096 00:14:30.071 iodepth=128 00:14:30.071 norandommap=0 00:14:30.071 numjobs=1 00:14:30.071 00:14:30.071 verify_dump=1 00:14:30.071 verify_backlog=512 00:14:30.071 verify_state_save=0 00:14:30.071 do_verify=1 00:14:30.071 verify=crc32c-intel 00:14:30.071 [job0] 00:14:30.071 filename=/dev/nvme0n1 00:14:30.071 Could not set queue depth (nvme0n1) 00:14:30.071 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:30.071 fio-3.35 00:14:30.071 Starting 1 thread 00:14:31.006 13:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:31.264 13:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:14:31.524 13:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:14:31.524 13:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:14:31.524 13:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:31.524 13:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:31.524 13:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:31.524 13:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:31.524 13:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:14:31.524 13:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:14:31.524 13:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:31.524 13:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:31.524 13:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:31.524 13:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:31.524 13:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:14:32.459 13:08:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:32.459 13:08:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:32.459 13:08:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:32.459 13:08:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:33.024 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:14:33.283 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:14:33.283 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:14:33.283 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:33.283 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:33.283 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:33.283 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:33.283 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:14:33.283 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:14:33.283 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:33.283 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:33.283 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:33.283 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:33.283 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:14:34.218 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:34.218 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]]
13:08:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
13:08:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 68674
00:14:36.746
00:14:36.746 job0: (groupid=0, jobs=1): err= 0: pid=68701: Fri Dec 6 13:08:42 2024
00:14:36.746 read: IOPS=10.4k, BW=40.8MiB/s (42.8MB/s)(245MiB/6004msec)
00:14:36.746 slat (usec): min=2, max=5668, avg=55.61, stdev=254.83
00:14:36.746 clat (usec): min=2112, max=18401, avg=8367.39, stdev=1406.49
00:14:36.746 lat (usec): min=2393, max=18412, avg=8423.00, stdev=1417.62
00:14:36.746 clat percentiles (usec):
00:14:36.746 | 1.00th=[ 5080], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 7439],
00:14:36.746 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8160], 60.00th=[ 8455],
00:14:36.746 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[ 9896], 95.00th=[11076],
00:14:36.746 | 99.00th=[12780], 99.50th=[13829], 99.90th=[16319], 99.95th=[17433],
00:14:36.746 | 99.99th=[17695]
00:14:36.746 bw ( KiB/s): min= 7744, max=29088, per=51.31%, avg=21425.45, stdev=7126.39, samples=11
00:14:36.746 iops : min= 1936, max= 7272, avg=5356.36, stdev=1781.60, samples=11
00:14:36.746 write: IOPS=6176, BW=24.1MiB/s (25.3MB/s)(128MiB/5305msec); 0 zone resets
00:14:36.746 slat (usec): min=4, max=6380, avg=66.47, stdev=175.65
00:14:36.746 clat (usec): min=1452, max=17738, avg=7229.31, stdev=1170.08
00:14:36.746 lat (usec): min=1486, max=18654, avg=7295.78, stdev=1175.97
00:14:36.746 clat percentiles (usec):
00:14:36.746 | 1.00th=[ 4080], 5.00th=[ 5473], 10.00th=[ 6063], 20.00th=[ 6521],
00:14:36.746 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7373],
00:14:36.746 | 70.00th=[ 7570], 80.00th=[ 7832], 90.00th=[ 8455], 95.00th=[ 9241],
00:14:36.746 | 99.00th=[11076], 99.50th=[12256], 99.90th=[13435], 99.95th=[14615],
00:14:36.746 | 99.99th=[15795]
00:14:36.746 bw ( KiB/s): min= 8192, max=28448, per=87.04%, avg=21504.00, stdev=6881.19, samples=11
00:14:36.746 iops : min= 2048, max= 7112, avg=5376.00, stdev=1720.30, samples=11
00:14:36.746 lat (msec) : 2=0.01%, 4=0.32%, 10=92.46%, 20=7.22%
00:14:36.746 cpu : usr=5.53%, sys=21.94%, ctx=5967, majf=0, minf=114
00:14:36.746 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:14:36.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:36.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:36.746 issued rwts: total=62674,32765,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:36.746 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:36.746
00:14:36.746 Run status group 0 (all jobs):
00:14:36.746 READ: bw=40.8MiB/s (42.8MB/s), 40.8MiB/s-40.8MiB/s (42.8MB/s-42.8MB/s), io=245MiB (257MB), run=6004-6004msec
00:14:36.746 WRITE: bw=24.1MiB/s (25.3MB/s), 24.1MiB/s-24.1MiB/s (25.3MB/s-25.3MB/s), io=128MiB (134MB), run=5305-5305msec
00:14:36.746
00:14:36.746 Disk stats (read/write):
00:14:36.746 nvme0n1: ios=61637/32244, merge=0/0, ticks=484060/217730, in_queue=701790, util=98.58%
00:14:36.746 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
00:14:37.004 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:14:37.004 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:14:37.004 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:14:37.004 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:37.004 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:37.004 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:37.004 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:37.004 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:14:37.004 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:14:37.004 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:37.004 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:37.004 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:37.004 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:14:37.004 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:14:37.935 13:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:37.935 13:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:37.935 13:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:37.935 13:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:14:37.935 13:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=68833 00:14:37.935 13:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:14:37.935 13:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:14:37.935 [global] 00:14:37.935 thread=1 00:14:37.935 invalidate=1 00:14:37.935 rw=randrw 00:14:37.935 time_based=1 00:14:37.935 runtime=6 00:14:37.935 ioengine=libaio 00:14:37.935 direct=1 00:14:37.935 bs=4096 00:14:37.935 iodepth=128 00:14:37.935 norandommap=0 00:14:37.935 numjobs=1 00:14:37.935 00:14:37.935 verify_dump=1 00:14:37.935 verify_backlog=512 00:14:37.935 verify_state_save=0 00:14:37.935 do_verify=1 00:14:37.935 verify=crc32c-intel 00:14:37.935 [job0] 00:14:37.935 filename=/dev/nvme0n1 00:14:37.935 Could not set queue depth (nvme0n1) 00:14:38.192 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:38.192 fio-3.35 00:14:38.192 Starting 1 thread 00:14:39.125 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:39.383 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:14:39.642 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:14:39.642 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:14:39.642 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:39.642 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:39.642 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:39.642 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:39.642 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:14:39.642 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:14:39.642 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:39.642 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:39.642 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:39.642 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:39.642 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:14:40.576 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:40.576 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:40.576 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:40.576 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:41.141 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:14:41.400 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:14:41.400 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:14:41.400 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:41.400 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:41.400 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:41.400 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:41.400 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:14:41.400 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:14:41.400 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:41.400 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:41.400 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:41.400 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:41.400 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:14:42.342 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:42.343 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:42.343 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:42.343 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 68833 00:14:44.245 00:14:44.245 job0: (groupid=0, jobs=1): err= 0: pid=68854: Fri Dec 6 13:08:50 2024 00:14:44.245 read: IOPS=11.7k, BW=45.8MiB/s (48.0MB/s)(275MiB/6004msec) 00:14:44.245 slat (usec): min=3, max=7916, avg=42.22, stdev=209.09 00:14:44.245 clat (usec): min=247, max=48892, avg=7514.91, stdev=1989.47 00:14:44.245 lat (usec): min=301, max=48901, avg=7557.13, stdev=2002.92 00:14:44.245 clat percentiles (usec): 00:14:44.245 | 1.00th=[ 2474], 5.00th=[ 4178], 10.00th=[ 5014], 20.00th=[ 5997], 00:14:44.245 | 30.00th=[ 6849], 40.00th=[ 7373], 50.00th=[ 7701], 60.00th=[ 7963], 00:14:44.245 | 70.00th=[ 8356], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[10421], 00:14:44.245 | 99.00th=[12649], 99.50th=[13566], 99.90th=[17171], 99.95th=[18482], 00:14:44.245 | 99.99th=[48497] 00:14:44.245 bw ( KiB/s): min= 4992, max=44416, per=53.92%, avg=25299.64, stdev=11727.74, samples=11 00:14:44.245 iops : min= 1248, max=11104, avg=6324.91, stdev=2931.93, samples=11 00:14:44.245 write: IOPS=7549, BW=29.5MiB/s (30.9MB/s)(148MiB/5017msec); 0 zone resets 00:14:44.245 slat (usec): min=6, max=2475, avg=55.47, stdev=121.97 00:14:44.245 clat (usec): min=432, max=18626, avg=6144.16, stdev=1856.58 00:14:44.245 lat (usec): min=469, max=19496, avg=6199.63, stdev=1867.05 00:14:44.245 clat percentiles (usec): 00:14:44.245 | 1.00th=[ 1778], 5.00th=[ 3032], 10.00th=[ 3654], 20.00th=[ 4424], 00:14:44.245 | 30.00th=[ 5145], 40.00th=[ 5866], 50.00th=[ 6456], 60.00th=[ 6849], 00:14:44.245 | 70.00th=[ 7177], 80.00th=[ 7504], 90.00th=[ 8094], 95.00th=[ 8848], 00:14:44.245 | 99.00th=[10683], 99.50th=[11731], 99.90th=[13829], 99.95th=[15008], 00:14:44.245 | 99.99th=[16319] 00:14:44.245 bw ( KiB/s): min= 4608, max=43800, per=83.82%, avg=25310.55, stdev=11579.95, samples=11 00:14:44.245 iops : min= 1152, max=10950, avg=6327.64, stdev=2894.99, samples=11 00:14:44.245 lat (usec) : 250=0.01%, 500=0.01%, 750=0.05%, 1000=0.09% 00:14:44.245 lat (msec) : 2=0.70%, 4=6.97%, 10=87.07%, 20=5.11%, 50=0.01% 00:14:44.245 cpu : usr=7.50%, sys=28.95%, ctx=8735, majf=0, minf=199 00:14:44.245 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:14:44.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:44.245 issued rwts: total=70431,37874,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.245 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:44.245 00:14:44.245 Run status group 0 (all jobs): 00:14:44.245 READ: bw=45.8MiB/s (48.0MB/s), 45.8MiB/s-45.8MiB/s (48.0MB/s-48.0MB/s), io=275MiB (288MB), run=6004-6004msec 00:14:44.245 WRITE: bw=29.5MiB/s (30.9MB/s), 29.5MiB/s-29.5MiB/s (30.9MB/s-30.9MB/s), io=148MiB (155MB), run=5017-5017msec 00:14:44.245 00:14:44.245 Disk stats (read/write): 00:14:44.245 nvme0n1: ios=69351/37322, merge=0/0, ticks=471247/196968, in_queue=668215, util=98.65% 00:14:44.245 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:44.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:44.245 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:44.245 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:14:44.245 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:44.245 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:44.245 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:44.245 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:44.245 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:14:44.245 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:44.810 rmmod nvme_tcp 00:14:44.810 rmmod nvme_fabrics 00:14:44.810 rmmod nvme_keyring 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 68550 ']' 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 68550 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 68550 ']' 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 68550 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68550 00:14:44.810 killing process with pid 68550 00:14:44.810 
13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68550' 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 68550 00:14:44.810 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 68550 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:45.068 13:08:51 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:14:45.068 ************************************ 00:14:45.068 END TEST nvmf_target_multipath 00:14:45.068 ************************************ 00:14:45.068 00:14:45.068 real 0m20.458s 00:14:45.068 user 1m19.997s 00:14:45.068 sys 0m6.754s 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:45.068 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:45.328 ************************************ 00:14:45.328 START TEST nvmf_zcopy 00:14:45.328 ************************************ 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:45.328 * Looking for test storage... 
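The multipath run that just finished leaned on one helper throughout: check_ana_state from target/multipath.sh, whose xtrace lines (@18-@26) appear above. A minimal sketch of that polling loop, reconstructed from the trace; the failure return when the 20 retries run out is an assumption, since the log never exhausts the timeout:

    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # Poll sysfs until the kernel's ANA code reports the expected state.
        while [[ ! -e $ana_state_f ]] || [[ $(< "$ana_state_f") != "$ana_state" ]]; do
            sleep 1s
            (( timeout-- == 0 )) && return 1   # assumed failure path; never hit in the trace
        done
    }

That is why the trace alternates between the @25 comparison and "sleep 1s": after each nvmf_subsystem_listener_set_ana_state call the test simply waits for /sys/block/nvme0cXn1/ana_state to catch up before letting fio continue.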
00:14:45.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:45.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.328 --rc genhtml_branch_coverage=1 00:14:45.328 --rc genhtml_function_coverage=1 00:14:45.328 --rc genhtml_legend=1 00:14:45.328 --rc geninfo_all_blocks=1 00:14:45.328 --rc geninfo_unexecuted_blocks=1 00:14:45.328 00:14:45.328 ' 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:45.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.328 --rc genhtml_branch_coverage=1 00:14:45.328 --rc genhtml_function_coverage=1 00:14:45.328 --rc genhtml_legend=1 00:14:45.328 --rc geninfo_all_blocks=1 00:14:45.328 --rc geninfo_unexecuted_blocks=1 00:14:45.328 00:14:45.328 ' 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:45.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.328 --rc genhtml_branch_coverage=1 00:14:45.328 --rc genhtml_function_coverage=1 00:14:45.328 --rc genhtml_legend=1 00:14:45.328 --rc geninfo_all_blocks=1 00:14:45.328 --rc geninfo_unexecuted_blocks=1 00:14:45.328 00:14:45.328 ' 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:45.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.328 --rc genhtml_branch_coverage=1 00:14:45.328 --rc genhtml_function_coverage=1 00:14:45.328 --rc genhtml_legend=1 00:14:45.328 --rc geninfo_all_blocks=1 00:14:45.328 --rc geninfo_unexecuted_blocks=1 00:14:45.328 00:14:45.328 ' 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
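The lt/cmp_versions trace above (scripts/common.sh@333-@368) is a generic component-wise version comparator guarding the lcov option selection. A rough standalone equivalent under the same splitting rules (IFS on '.', '-' and ':'); padding missing components with 0 and the behavior of the operators not exercised here are assumptions:

    version_lt() {
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        # Walk the longer of the two component lists, as in the traced loop.
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        done
        return 1   # equal
    }

    version_lt 1.15 2 && echo "lcov older than 2"   # succeeds, as in the trace: 1 < 2 at v=0

That is exactly the path visible above: ver1=(1 15), ver2=(2), and the first-component comparison decides it.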
00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.328 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:45.329 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
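Two things happen in the header that just scrolled by: paths/export.sh keeps re-prepending the Go/protoc/golangci toolchain directories to PATH (hence the visibly duplicated segments, which are harmless), and nvmf/common.sh mints a per-run host identity with nvme gen-hostnqn. A sketch of the latter; deriving the hostid with a parameter expansion is an assumption, as the trace only shows the resulting pair:

    # 'nvme gen-hostnqn' emits nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}   # trailing UUID, e.g. 78bfff7e-a47d-4a1e-bf3a-724541d7caf2
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # later consumed as, e.g.: nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.3 -s 4420 -n <subnqn>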
00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:45.329 Cannot find device "nvmf_init_br" 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:14:45.329 13:08:51 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:45.329 Cannot find device "nvmf_init_br2" 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:45.329 Cannot find device "nvmf_tgt_br" 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:14:45.329 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.588 Cannot find device "nvmf_tgt_br2" 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:45.588 Cannot find device "nvmf_init_br" 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:45.588 Cannot find device "nvmf_init_br2" 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:45.588 Cannot find device "nvmf_tgt_br" 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:45.588 Cannot find device "nvmf_tgt_br2" 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:45.588 Cannot find device "nvmf_br" 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:45.588 Cannot find device "nvmf_init_if" 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:45.588 Cannot find device "nvmf_init_if2" 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:45.588 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:45.588 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:45.588 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:45.588 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:45.588 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:45.588 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:45.588 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:45.588 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:45.588 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:45.588 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:45.588 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:45.588 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:45.588 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:45.588 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:45.589 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:45.589 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:45.589 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:45.589 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:45.589 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:45.589 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:45.847 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:45.847 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:45.847 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:45.847 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:45.847 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:45.847 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:45.847 13:08:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:45.847 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:45.847 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:45.847 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:45.847 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:45.847 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:14:45.847 00:14:45.847 --- 10.0.0.3 ping statistics --- 00:14:45.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.847 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:45.847 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:45.847 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:45.847 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:14:45.847 00:14:45.847 --- 10.0.0.4 ping statistics --- 00:14:45.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.847 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:45.847 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:45.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:45.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:14:45.847 00:14:45.847 --- 10.0.0.1 ping statistics --- 00:14:45.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.847 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:14:45.847 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:45.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:45.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:14:45.847 00:14:45.848 --- 10.0.0.2 ping statistics --- 00:14:45.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.848 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=69193 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 69193 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 69193 ']' 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.848 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.848 [2024-12-06 13:08:52.284291] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
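The burst of "Cannot find device" messages above is only the pre-flight teardown running against links that do not exist yet; nvmf_veth_init then builds the test network from scratch. The same ip commands, condensed into one runnable sketch (every name and address is taken from the trace; only the loops are new):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator, 10.0.0.1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator, 10.0.0.2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target,    10.0.0.3
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target,    10.0.0.4
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

The port-4420 ACCEPT rules are added with -m comment --comment 'SPDK_NVMF:...' precisely so that teardown can strip them wholesale (the iptables-save | grep -v SPDK_NVMF | iptables-restore sequence seen at the end of the multipath run), and the four pings just above are the smoke test: default netns to 10.0.0.3/10.0.0.4 across the bridge, and from inside nvmf_tgt_ns_spdk back to 10.0.0.1/10.0.0.2.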
00:14:45.848 [2024-12-06 13:08:52.284378] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.106 [2024-12-06 13:08:52.432899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.106 [2024-12-06 13:08:52.477954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.106 [2024-12-06 13:08:52.478010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.106 [2024-12-06 13:08:52.478025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.106 [2024-12-06 13:08:52.478036] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.106 [2024-12-06 13:08:52.478045] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.106 [2024-12-06 13:08:52.478425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.106 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.106 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:14:46.106 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:46.106 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:46.106 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:46.106 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.106 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:46.106 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:46.106 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.106 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:46.106 [2024-12-06 13:08:52.612802] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.106 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.106 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:46.106 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.106 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:46.106 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.106 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:46.106 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.106 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:46.106 [2024-12-06 13:08:52.628926] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:46.365 malloc0 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:46.365 { 00:14:46.365 "params": { 00:14:46.365 "name": "Nvme$subsystem", 00:14:46.365 "trtype": "$TEST_TRANSPORT", 00:14:46.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:46.365 "adrfam": "ipv4", 00:14:46.365 "trsvcid": "$NVMF_PORT", 00:14:46.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:46.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:46.365 "hdgst": ${hdgst:-false}, 00:14:46.365 "ddgst": ${ddgst:-false} 00:14:46.365 }, 00:14:46.365 "method": "bdev_nvme_attach_controller" 00:14:46.365 } 00:14:46.365 EOF 00:14:46.365 )") 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
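With the target app up on core 1, zcopy.sh assembles the data path over JSON-RPC and then points bdevperf at it. The sequence, collected from the trace (rpc.py invoked directly, as elsewhere in this log); the flag glosses in the comments are my reading of the rpc.py options, and the outer "subsystems" wrapper in the JSON is an assumption, since the trace only prints the attach-controller stanza it interpolates:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

    # Target side (runs inside nvmf_tgt_ns_spdk, as nvmfappstart does):
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy     # -c 0: no in-capsule data; --zcopy: zero-copy path
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                    # any host, fixed serial, max 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0            # 32 MiB ramdisk, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # Initiator side: hand bdevperf its config without touching disk.
    # The harness does this via --json /dev/fd/62; process substitution is the same trick.
    gen_json() {
        printf '%s\n' '{"subsystems":[{"subsystem":"bdev","config":[
            {"method":"bdev_nvme_attach_controller",
             "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.3","adrfam":"ipv4",
                       "trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode1",
                       "hostnqn":"nqn.2016-06.io.spdk:host1","hdgst":false,"ddgst":false}}]}]}'
    }
    $bdevperf --json <(gen_json) -t 10 -q 128 -w verify -o 8192

Ordering matters here: the --zcopy transport must exist before the subsystem is created, and malloc0 must exist before it can be attached as namespace 1. The second bdevperf run further down reuses the same generated config with -t 5 -q 128 -w randrw -M 50 -o 8192.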
00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:14:46.365 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:46.365 "params": { 00:14:46.365 "name": "Nvme1", 00:14:46.365 "trtype": "tcp", 00:14:46.365 "traddr": "10.0.0.3", 00:14:46.365 "adrfam": "ipv4", 00:14:46.365 "trsvcid": "4420", 00:14:46.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:46.365 "hdgst": false, 00:14:46.365 "ddgst": false 00:14:46.365 }, 00:14:46.365 "method": "bdev_nvme_attach_controller" 00:14:46.365 }' 00:14:46.365 [2024-12-06 13:08:52.722560] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:14:46.365 [2024-12-06 13:08:52.722655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69225 ] 00:14:46.365 [2024-12-06 13:08:52.871724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.623 [2024-12-06 13:08:52.913166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.623 Running I/O for 10 seconds... 00:14:48.928 5633.00 IOPS, 44.01 MiB/s [2024-12-06T13:08:56.390Z] 5622.00 IOPS, 43.92 MiB/s [2024-12-06T13:08:57.324Z] 5670.00 IOPS, 44.30 MiB/s [2024-12-06T13:08:58.259Z] 5678.50 IOPS, 44.36 MiB/s [2024-12-06T13:08:59.195Z] 5694.20 IOPS, 44.49 MiB/s [2024-12-06T13:09:00.128Z] 5704.17 IOPS, 44.56 MiB/s [2024-12-06T13:09:01.091Z] 5663.57 IOPS, 44.25 MiB/s [2024-12-06T13:09:02.465Z] 5653.62 IOPS, 44.17 MiB/s [2024-12-06T13:09:03.401Z] 5665.67 IOPS, 44.26 MiB/s [2024-12-06T13:09:03.401Z] 5672.70 IOPS, 44.32 MiB/s 00:14:56.873 Latency(us) 00:14:56.873 [2024-12-06T13:09:03.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.873 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:56.873 Verification LBA range: start 0x0 length 0x1000 00:14:56.873 Nvme1n1 : 10.01 5674.01 44.33 0.00 0.00 22486.04 426.36 34555.35 00:14:56.873 [2024-12-06T13:09:03.401Z] =================================================================================================================== 00:14:56.873 [2024-12-06T13:09:03.401Z] Total : 5674.01 44.33 0.00 0.00 22486.04 426.36 34555.35 00:14:56.873 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=69348 00:14:56.873 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:56.873 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:56.873 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:56.873 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:56.873 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:14:56.873 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:14:56.873 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:56.873 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:56.873 { 00:14:56.873 "params": { 00:14:56.873 "name": "Nvme$subsystem", 
00:14:56.873 "trtype": "$TEST_TRANSPORT", 00:14:56.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:56.873 "adrfam": "ipv4", 00:14:56.873 "trsvcid": "$NVMF_PORT", 00:14:56.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:56.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:56.873 "hdgst": ${hdgst:-false}, 00:14:56.873 "ddgst": ${ddgst:-false} 00:14:56.873 }, 00:14:56.873 "method": "bdev_nvme_attach_controller" 00:14:56.874 } 00:14:56.874 EOF 00:14:56.874 )") 00:14:56.874 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:14:56.874 [2024-12-06 13:09:03.220982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.874 [2024-12-06 13:09:03.221036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.874 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:14:56.874 2024/12/06 13:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:56.874 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:14:56.874 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:56.874 "params": { 00:14:56.874 "name": "Nvme1", 00:14:56.874 "trtype": "tcp", 00:14:56.874 "traddr": "10.0.0.3", 00:14:56.874 "adrfam": "ipv4", 00:14:56.874 "trsvcid": "4420", 00:14:56.874 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.874 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:56.874 "hdgst": false, 00:14:56.874 "ddgst": false 00:14:56.874 }, 00:14:56.874 "method": "bdev_nvme_attach_controller" 00:14:56.874 }' 00:14:56.874 [2024-12-06 13:09:03.228907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.874 [2024-12-06 13:09:03.228979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.874 2024/12/06 13:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:56.874 [2024-12-06 13:09:03.236927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.874 [2024-12-06 13:09:03.236989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.874 2024/12/06 13:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:56.874 [2024-12-06 13:09:03.244920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.874 [2024-12-06 13:09:03.244987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.874 2024/12/06 13:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
00:14:56.874 [2024-12-06 13:09:03.271730] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization...
00:14:56.874 [2024-12-06 13:09:03.271816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69348 ]
00:14:56.874 [2024-12-06 13:09:03.272899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:56.874 [2024-12-06 13:09:03.272934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:56.874 2024/12/06 13:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
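The /dev/fd/63 in the bdevperf invocation above is bash process substitution: gen_nvmf_target_json never touches disk, and bdevperf simply reads the JSON from an inherited pipe. A minimal sketch of the same pattern (the stub config function is a stand-in, not the real gen_nvmf_target_json):

    # Feed generated JSON to a consumer without a temp file, as the harness does;
    # <(gen_config) expands to a /dev/fd/N path backed by the function's stdout.
    gen_config() { printf '{"subsystems": []}\n'; }   # illustrative stand-in
    ./build/examples/bdevperf --json <(gen_config) -t 5 -q 128 -w randrw -M 50 -o 8192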
00:14:57.134 [2024-12-06 13:09:03.416933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:57.134 [2024-12-06 13:09:03.416975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:57.134 2024/12/06 13:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:14:57.134 [2024-12-06 13:09:03.422198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:57.134 [2024-12-06 13:09:03.428972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:57.134 [2024-12-06 13:09:03.429013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:57.134 2024/12/06 13:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:14:57.134 [2024-12-06 13:09:03.459592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:57.134 [2024-12-06 13:09:03.464964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:57.134 [2024-12-06 13:09:03.464993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:57.134 2024/12/06 13:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:14:57.134 [2024-12-06 13:09:03.593014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:57.135 [2024-12-06 13:09:03.593049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:57.135 2024/12/06 13:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:14:57.135 Running I/O for 5 seconds...
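From here the two halves of the test overlap: bdevperf drives randrw I/O against Nvme1n1 while the harness keeps replaying the add-namespace RPC and expects every attempt to be rejected. The steady 10-15 ms spacing of the failures suggests a loop of roughly this shape (purely illustrative; the actual logic lives in target/zcopy.sh):

    # Keep poking the target while the perf job is alive; any success is a bug,
    # since NSID 1 already belongs to cnode1. $perfpid is the bdevperf PID above.
    while kill -0 "$perfpid" 2>/dev/null; do
        if scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0 2>/dev/null; then
            echo "BUG: duplicate NSID 1 was accepted" >&2
            exit 1
        fi
    done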
00:14:57.135 [2024-12-06 13:09:03.601007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:57.135 [2024-12-06 13:09:03.601036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:57.135 2024/12/06 13:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:14:58.175 10521.00 IOPS, 82.20 MiB/s [2024-12-06T13:09:04.703Z]
00:14:58.435 [2024-12-06 13:09:04.780231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:58.435 [2024-12-06 13:09:04.780411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:58.435 2024/12/06 13:09:04 error on JSON-RPC
call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.435 [2024-12-06 13:09:04.796722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.435 [2024-12-06 13:09:04.796876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.435 2024/12/06 13:09:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.435 [2024-12-06 13:09:04.813477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.435 [2024-12-06 13:09:04.813633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.435 2024/12/06 13:09:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.435 [2024-12-06 13:09:04.829380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.435 [2024-12-06 13:09:04.829420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.435 2024/12/06 13:09:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.435 [2024-12-06 13:09:04.840398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.435 [2024-12-06 13:09:04.840450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.435 2024/12/06 13:09:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.435 [2024-12-06 13:09:04.856021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.435 [2024-12-06 13:09:04.856207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.435 2024/12/06 13:09:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.435 [2024-12-06 13:09:04.867315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.435 [2024-12-06 13:09:04.867470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.435 2024/12/06 13:09:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.435 [2024-12-06 13:09:04.882758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.435 [2024-12-06 13:09:04.882913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.435 2024/12/06 13:09:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.435 [2024-12-06 13:09:04.893537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.435 [2024-12-06 13:09:04.893691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.435 2024/12/06 13:09:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.435 [2024-12-06 13:09:04.909357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.435 [2024-12-06 13:09:04.909510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.435 2024/12/06 13:09:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.435 [2024-12-06 13:09:04.925601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.435 [2024-12-06 13:09:04.925754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.435 2024/12/06 13:09:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.435 [2024-12-06 13:09:04.936472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.435 [2024-12-06 13:09:04.936625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.435 2024/12/06 13:09:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.435 [2024-12-06 13:09:04.947403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.435 [2024-12-06 13:09:04.947556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.435 2024/12/06 13:09:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns 
method, err: Code=-32602 Msg=Invalid parameters 00:14:58.700 [2024-12-06 13:09:04.962837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.700 [2024-12-06 13:09:04.963093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.700 2024/12/06 13:09:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.700 [2024-12-06 13:09:04.979701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.700 [2024-12-06 13:09:04.979749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.700 2024/12/06 13:09:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.700 [2024-12-06 13:09:04.995625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.700 [2024-12-06 13:09:04.995789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.700 2024/12/06 13:09:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.700 [2024-12-06 13:09:05.012159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.700 [2024-12-06 13:09:05.012313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.700 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.700 [2024-12-06 13:09:05.028129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.700 [2024-12-06 13:09:05.028221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.700 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.700 [2024-12-06 13:09:05.039243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.700 [2024-12-06 13:09:05.039407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.700 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.700 [2024-12-06 13:09:05.050895] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.700 [2024-12-06 13:09:05.051081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.700 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.700 [2024-12-06 13:09:05.066692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.700 [2024-12-06 13:09:05.066892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.700 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.700 [2024-12-06 13:09:05.082562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.700 [2024-12-06 13:09:05.082717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.700 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.700 [2024-12-06 13:09:05.099714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.700 [2024-12-06 13:09:05.099933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.700 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.700 [2024-12-06 13:09:05.116451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.700 [2024-12-06 13:09:05.116702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.700 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.700 [2024-12-06 13:09:05.133422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.700 [2024-12-06 13:09:05.133479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.700 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.700 [2024-12-06 13:09:05.149400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.700 [2024-12-06 
13:09:05.149441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.700 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.700 [2024-12-06 13:09:05.166665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.700 [2024-12-06 13:09:05.166705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.700 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.700 [2024-12-06 13:09:05.182828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.700 [2024-12-06 13:09:05.183012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.700 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.700 [2024-12-06 13:09:05.199674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.700 [2024-12-06 13:09:05.199833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.700 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.700 [2024-12-06 13:09:05.215189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.700 [2024-12-06 13:09:05.215488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.700 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.958 [2024-12-06 13:09:05.232182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.958 [2024-12-06 13:09:05.232415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.958 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.958 [2024-12-06 13:09:05.248889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.958 [2024-12-06 13:09:05.249063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.958 2024/12/06 13:09:05 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.958 [2024-12-06 13:09:05.265523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.958 [2024-12-06 13:09:05.265678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.958 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.958 [2024-12-06 13:09:05.281539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.958 [2024-12-06 13:09:05.281692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.958 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.958 [2024-12-06 13:09:05.299430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.958 [2024-12-06 13:09:05.299584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.958 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.958 [2024-12-06 13:09:05.315765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.958 [2024-12-06 13:09:05.315918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.958 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.958 [2024-12-06 13:09:05.332371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.958 [2024-12-06 13:09:05.332525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.958 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.958 [2024-12-06 13:09:05.348276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.958 [2024-12-06 13:09:05.348429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.958 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.958 [2024-12-06 13:09:05.365953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.958 [2024-12-06 13:09:05.366128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.958 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.958 [2024-12-06 13:09:05.381807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.958 [2024-12-06 13:09:05.382093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.958 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.958 [2024-12-06 13:09:05.393549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.958 [2024-12-06 13:09:05.393853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.958 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.958 [2024-12-06 13:09:05.411105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.958 [2024-12-06 13:09:05.411305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.958 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.958 [2024-12-06 13:09:05.427820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.958 [2024-12-06 13:09:05.428034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.958 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.958 [2024-12-06 13:09:05.440736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.958 [2024-12-06 13:09:05.440930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.958 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.958 [2024-12-06 13:09:05.454088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.958 [2024-12-06 13:09:05.454141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.958 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.958 [2024-12-06 13:09:05.466982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.958 [2024-12-06 13:09:05.467037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.958 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:58.958 [2024-12-06 13:09:05.481211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.958 [2024-12-06 13:09:05.481276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.218 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.218 [2024-12-06 13:09:05.495156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.218 [2024-12-06 13:09:05.495196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.218 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.218 [2024-12-06 13:09:05.510375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.218 [2024-12-06 13:09:05.510413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.218 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.218 [2024-12-06 13:09:05.526283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.218 [2024-12-06 13:09:05.526329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.218 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.218 [2024-12-06 13:09:05.543805] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.218 [2024-12-06 13:09:05.543849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.218 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.218 [2024-12-06 13:09:05.554611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.218 [2024-12-06 13:09:05.554658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.218 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.218 [2024-12-06 13:09:05.570092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.218 [2024-12-06 13:09:05.570152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.218 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.218 [2024-12-06 13:09:05.587162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.218 [2024-12-06 13:09:05.587211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.218 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.218 10689.00 IOPS, 83.51 MiB/s [2024-12-06T13:09:05.746Z] [2024-12-06 13:09:05.603821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.218 [2024-12-06 13:09:05.603861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.218 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.218 [2024-12-06 13:09:05.614793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.218 [2024-12-06 13:09:05.614840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.218 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.218 [2024-12-06 13:09:05.630214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:14:59.218 [2024-12-06 13:09:05.630269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.218 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.218 [2024-12-06 13:09:05.641081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.218 [2024-12-06 13:09:05.641118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.218 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.218 [2024-12-06 13:09:05.655974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.218 [2024-12-06 13:09:05.656012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.218 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.218 [2024-12-06 13:09:05.666787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.218 [2024-12-06 13:09:05.666825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.218 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.218 [2024-12-06 13:09:05.681930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.218 [2024-12-06 13:09:05.681983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.218 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.218 [2024-12-06 13:09:05.697880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.218 [2024-12-06 13:09:05.697918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.218 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.218 [2024-12-06 13:09:05.708913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.218 [2024-12-06 13:09:05.708970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:14:59.218 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.218 [2024-12-06 13:09:05.723656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.218 [2024-12-06 13:09:05.723694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.218 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.218 [2024-12-06 13:09:05.740446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.218 [2024-12-06 13:09:05.740523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.477 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.477 [2024-12-06 13:09:05.756855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.477 [2024-12-06 13:09:05.756904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.477 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.477 [2024-12-06 13:09:05.773737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.477 [2024-12-06 13:09:05.773777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.477 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.477 [2024-12-06 13:09:05.790072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.477 [2024-12-06 13:09:05.790112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.477 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.477 [2024-12-06 13:09:05.800677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.477 [2024-12-06 13:09:05.800716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.477 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.477 [2024-12-06 13:09:05.816150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.477 [2024-12-06 13:09:05.816187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.477 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.477 [2024-12-06 13:09:05.833820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.477 [2024-12-06 13:09:05.833859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.477 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.477 [2024-12-06 13:09:05.849601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.477 [2024-12-06 13:09:05.849650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.478 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.478 [2024-12-06 13:09:05.866292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.478 [2024-12-06 13:09:05.866330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.478 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.478 [2024-12-06 13:09:05.882533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.478 [2024-12-06 13:09:05.882571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.478 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.478 [2024-12-06 13:09:05.898568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.478 [2024-12-06 13:09:05.898609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.478 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.478 [2024-12-06 13:09:05.909237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.478 [2024-12-06 13:09:05.909275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.478 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.478 [2024-12-06 13:09:05.924888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.478 [2024-12-06 13:09:05.924927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.478 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.478 [2024-12-06 13:09:05.940870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.478 [2024-12-06 13:09:05.940910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.478 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.478 [2024-12-06 13:09:05.950730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.478 [2024-12-06 13:09:05.950768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.478 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.478 [2024-12-06 13:09:05.966649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.478 [2024-12-06 13:09:05.966712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.478 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.478 [2024-12-06 13:09:05.982987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.478 [2024-12-06 13:09:05.983038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.478 2024/12/06 13:09:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:14:59.478 [2024-12-06 13:09:05.999173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.478 [2024-12-06 13:09:05.999213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.736 2024/12/06 13:09:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.736 [2024-12-06 13:09:06.009959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.736 [2024-12-06 13:09:06.009998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.736 2024/12/06 13:09:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.736 [2024-12-06 13:09:06.025079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.736 [2024-12-06 13:09:06.025118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.736 2024/12/06 13:09:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.736 [2024-12-06 13:09:06.041163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.736 [2024-12-06 13:09:06.041202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.737 2024/12/06 13:09:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.737 [2024-12-06 13:09:06.052554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.737 [2024-12-06 13:09:06.052597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.737 2024/12/06 13:09:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.737 [2024-12-06 13:09:06.068074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.737 [2024-12-06 13:09:06.068114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.737 2024/12/06 13:09:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.737 [2024-12-06 13:09:06.084084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:14:59.737 [2024-12-06 13:09:06.084123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.737 2024/12/06 13:09:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-record failure (subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace / JSON-RPC error Code=-32602 Msg=Invalid parameters) recurs for every nvmf_subsystem_add_ns retry from 13:09:06.102809 through 13:09:06.591, elapsed 00:14:59.737 to 00:15:00.255; only the timestamps change ...]
00:15:00.255 10790.33 IOPS, 84.30 MiB/s [2024-12-06T13:09:06.783Z]
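The call being retried above is the negative-path half of this namespace test: nvmf_subsystem_add_ns is asked to attach bdev malloc0 to nqn.2016-06.io.spdk:cnode1 with nsid 1 while NSID 1 is already attached, and the target rejects it with JSON-RPC error -32602, the standard JSON-RPC 2.0 "invalid params" code. (The %!s(bool=false) tokens in the params dump are the Go test client printing booleans with a %s verb, not part of the RPC payload.) A minimal sketch of that exchange in Python, assuming the SPDK default RPC socket path /var/tmp/spdk.sock; the test itself drives this through its Go client, so the helper below is illustrative only:

    import json
    import socket

    def spdk_rpc(method, params, sock_path="/var/tmp/spdk.sock"):
        """Send one JSON-RPC 2.0 request to a running SPDK target and decode the reply."""
        request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(sock_path)
            sock.sendall(json.dumps(request).encode())
            buf = b""
            while True:  # read until one complete JSON object has arrived
                chunk = sock.recv(4096)
                if not chunk:
                    raise ConnectionError("SPDK closed the RPC socket")
                buf += chunk
                try:
                    return json.loads(buf)
                except ValueError:
                    continue

    # Params mirror the map[...] dump in the log records above.
    ns_params = {"nqn": "nqn.2016-06.io.spdk:cnode1",
                 "namespace": {"bdev_name": "malloc0", "nsid": 1}}
    spdk_rpc("nvmf_subsystem_add_ns", ns_params)          # first add: succeeds
    reply = spdk_rpc("nvmf_subsystem_add_ns", ns_params)  # same NSID again
    # reply["error"] -> {"code": -32602, "message": "Invalid parameters"}, while the
    # target side logs "Requested NSID 1 already in use" / "Unable to add namespace".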
[... the identical failure continues for every nvmf_subsystem_add_ns retry from 13:09:06.610886 through 13:09:07.590711, elapsed 00:15:00.255 to 00:15:01.326; timestamps elided ...]
00:15:01.326 10712.75 IOPS, 83.69 MiB/s [2024-12-06T13:09:07.854Z]
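Since every retry in this stretch contributes the same pair of target-side records, the run reduces to its timestamps. For post-processing a flattened console log like this one, a throwaway sketch that lifts out the SPDK records; the [timestamp] file:line:function: *LEVEL*: message layout is inferred from the lines above, and the function and field names are my own:

    import re

    # Matches one SPDK target log record as it appears in this console stream, e.g.
    # [2024-12-06 13:09:06.084123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
    RECORD = re.compile(
        r"\[(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6})\]\s+"
        r"(?P<src>[\w.]+:\d+:\w+):\s+\*(?P<level>[A-Z]+)\*:\s+"
        r"(?P<msg>.*?)(?=\s+\d{2}:\d{2}:\d{2}\.\d{3}\s|\s+\[\d{4}-|\Z)")

    def spdk_records(console_text):
        """Yield (timestamp, source, level, message) tuples from a flattened console log."""
        for m in RECORD.finditer(console_text):
            yield m.group("ts"), m.group("src"), m.group("level"), m.group("msg")

    sample = ("[2024-12-06 13:09:06.084123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: "
              "*ERROR*: Unable to add namespace 00:14:59.737 ")
    print(next(spdk_records(sample)))
    # -> ('2024-12-06 13:09:06.084123', 'nvmf_rpc.c:1520:nvmf_rpc_ns_paused',
    #     'ERROR', 'Unable to add namespace')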
[... the same failure repeats for every nvmf_subsystem_add_ns retry from 13:09:07.608325 through 13:09:07.891936, elapsed 00:15:01.326 to 00:15:01.610; timestamps elided ...]
00:15:01.610 [2024-12-06 13:09:07.908284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.610 [2024-12-06 13:09:07.908329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.610 2024/12/06 13:09:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid
parameters 00:15:01.610 [2024-12-06 13:09:07.924173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.610 [2024-12-06 13:09:07.924215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.610 2024/12/06 13:09:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.610 [2024-12-06 13:09:07.935005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.610 [2024-12-06 13:09:07.935043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.610 2024/12/06 13:09:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.610 [2024-12-06 13:09:07.950486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.610 [2024-12-06 13:09:07.950528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.610 2024/12/06 13:09:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.610 [2024-12-06 13:09:07.966031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.610 [2024-12-06 13:09:07.966073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.610 2024/12/06 13:09:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.610 [2024-12-06 13:09:07.976632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.610 [2024-12-06 13:09:07.976672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.610 2024/12/06 13:09:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.610 [2024-12-06 13:09:07.991578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.610 [2024-12-06 13:09:07.991623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.610 2024/12/06 13:09:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.610 [2024-12-06 13:09:08.001153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:15:01.610 [2024-12-06 13:09:08.001193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.610 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.610 [2024-12-06 13:09:08.013897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.610 [2024-12-06 13:09:08.013959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.610 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.610 [2024-12-06 13:09:08.029665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.610 [2024-12-06 13:09:08.029712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.610 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.610 [2024-12-06 13:09:08.045766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.610 [2024-12-06 13:09:08.045821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.610 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.610 [2024-12-06 13:09:08.061768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.610 [2024-12-06 13:09:08.061805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.610 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.610 [2024-12-06 13:09:08.078174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.610 [2024-12-06 13:09:08.078218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.610 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.610 [2024-12-06 13:09:08.096009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.610 [2024-12-06 13:09:08.096057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:01.610 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.610 [2024-12-06 13:09:08.112610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.610 [2024-12-06 13:09:08.112667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.610 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.610 [2024-12-06 13:09:08.129198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.610 [2024-12-06 13:09:08.129236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.610 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.868 [2024-12-06 13:09:08.145100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.868 [2024-12-06 13:09:08.145139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.868 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.868 [2024-12-06 13:09:08.161323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.868 [2024-12-06 13:09:08.161361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.868 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.868 [2024-12-06 13:09:08.177722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.868 [2024-12-06 13:09:08.177777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.868 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.868 [2024-12-06 13:09:08.188200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.868 [2024-12-06 13:09:08.188234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.868 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.868 [2024-12-06 13:09:08.203574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.868 [2024-12-06 13:09:08.203621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.868 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.868 [2024-12-06 13:09:08.219249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.868 [2024-12-06 13:09:08.219291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.868 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.868 [2024-12-06 13:09:08.235454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.868 [2024-12-06 13:09:08.235494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.868 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.868 [2024-12-06 13:09:08.249351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.868 [2024-12-06 13:09:08.249390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.868 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.868 [2024-12-06 13:09:08.265182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.868 [2024-12-06 13:09:08.265222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.868 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.868 [2024-12-06 13:09:08.281103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.868 [2024-12-06 13:09:08.281164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.868 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.868 [2024-12-06 13:09:08.298589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.868 [2024-12-06 13:09:08.298630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.868 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.868 [2024-12-06 13:09:08.314637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.868 [2024-12-06 13:09:08.314745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.868 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.868 [2024-12-06 13:09:08.330696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.868 [2024-12-06 13:09:08.330749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.868 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.868 [2024-12-06 13:09:08.341603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.868 [2024-12-06 13:09:08.341641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.868 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.868 [2024-12-06 13:09:08.357335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.868 [2024-12-06 13:09:08.357372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.868 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:01.868 [2024-12-06 13:09:08.373010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.868 [2024-12-06 13:09:08.373047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.868 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:01.868 [2024-12-06 13:09:08.388716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.868 [2024-12-06 13:09:08.388756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.868 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:02.126 [2024-12-06 13:09:08.405858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.126 [2024-12-06 13:09:08.405900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.126 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:02.126 [2024-12-06 13:09:08.422162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.126 [2024-12-06 13:09:08.422202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.126 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:02.126 [2024-12-06 13:09:08.438219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.126 [2024-12-06 13:09:08.438269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.126 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:02.126 [2024-12-06 13:09:08.449001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.126 [2024-12-06 13:09:08.449037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.126 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:02.126 [2024-12-06 13:09:08.464192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.126 [2024-12-06 13:09:08.464229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.126 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:02.126 [2024-12-06 13:09:08.480034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:15:02.126 [2024-12-06 13:09:08.480072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.126 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:02.127 [2024-12-06 13:09:08.496247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.127 [2024-12-06 13:09:08.496287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.127 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:02.127 [2024-12-06 13:09:08.514074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.127 [2024-12-06 13:09:08.514118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.127 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:02.127 [2024-12-06 13:09:08.530352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.127 [2024-12-06 13:09:08.530397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.127 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:02.127 [2024-12-06 13:09:08.545792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.127 [2024-12-06 13:09:08.545835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.127 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:02.127 [2024-12-06 13:09:08.556089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.127 [2024-12-06 13:09:08.556132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.127 2024/12/06 13:09:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:02.127 [2024-12-06 13:09:08.572788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.127 [2024-12-06 13:09:08.572831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
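For reference, the call failing above is the nvmf_subsystem_add_ns JSON-RPC method, invoked here through the Go RPC client. A minimal sketch of issuing the same request by hand with SPDK's scripts/rpc.py, assuming a running target that already exposes malloc0 as NSID 1 on nqn.2016-06.io.spdk:cnode1 (the situation this test deliberately creates):

  # Sketch: re-issue the RPC seen in the log (NQN and bdev name taken from the log above).
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # While NSID 1 is occupied this should fail with JSON-RPC code -32602
  # ("Invalid parameters"), the client-side view of "Requested NSID 1 already in use".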
10767.00 IOPS, 84.12 MiB/s [2024-12-06T13:09:08.655Z]
[... one further identical nvmf_subsystem_add_ns failure at 13:09:08.604268 ...]
00:15:02.127 Latency(us)
00:15:02.127 [2024-12-06T13:09:08.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:02.127 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:02.127 Nvme1n1 : 5.01 10772.20 84.16 0.00 0.00 11867.95 4706.68 28120.90
00:15:02.127 [2024-12-06T13:09:08.655Z] ===================================================================================================================
00:15:02.127 [2024-12-06T13:09:08.655Z] Total : 10772.20 84.16 0.00 0.00 11867.95 4706.68 28120.90
[... with the I/O run finished, the add_ns attempts keep failing the same way at 13:09:08.614643 through 13:09:08.710651 ...]
[... three final identical failures at 13:09:08.722698, 13:09:08.734648 and 13:09:08.746646, after which the background I/O process is gone ...]
00:15:02.386 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (69348) - No such process
00:15:02.386 13:09:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 69348
00:15:02.386 13:09:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:02.386 13:09:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.386 13:09:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:02.386 13:09:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.386 13:09:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:15:02.386 13:09:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.386 13:09:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:02.386 delay0
00:15:02.386 13:09:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.386 13:09:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:15:02.386 13:09:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.386 13:09:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:02.386 13:09:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
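The trace above removes the malloc-backed namespace and re-adds it behind a delay bdev with 1000000 us configured on each latency knob (average and p99, read and write), presumably so the abort run that follows always has slow, in-flight commands to cancel. Since rpc_cmd forwards its arguments to SPDK's scripts/rpc.py, the rough standalone equivalents of the three calls would be:

  # Approximate rpc.py equivalents of the rpc_cmd calls traced above.
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1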
00:15:02.386 13:09:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:15:02.644 [2024-12-06 13:09:08.964801] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:15:10.753 Initializing NVMe Controllers
00:15:10.753 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:15:10.753 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:15:10.753 Initialization complete. Launching workers.
00:15:10.753 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 266, failed: 18214
00:15:10.753 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18392, failed to submit 88
00:15:10.753 success 18276, unsuccessful 116, failed 0
00:15:10.753 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:15:10.753 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:15:10.753 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:15:10.753 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:15:10.753 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:10.753 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:15:10.753 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:10.753 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:15:10.753 rmmod nvme_tcp
00:15:10.753 rmmod nvme_fabrics
00:15:10.753 rmmod nvme_keyring
00:15:10.753 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 69193 ']'
00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 69193
00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 69193 ']'
00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 69193
00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69193
00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:15:10.754 killing process with pid 69193
13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69193'
13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 69193
13:09:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 69193 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:15:10.754 00:15:10.754 real 0m24.883s 00:15:10.754 user 0m40.000s 00:15:10.754 sys 0m6.819s 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:10.754 ************************************ 
00:15:10.754 END TEST nvmf_zcopy 00:15:10.754 ************************************ 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:10.754 ************************************ 00:15:10.754 START TEST nvmf_nmic 00:15:10.754 ************************************ 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:10.754 * Looking for test storage... 00:15:10.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:10.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.754 --rc genhtml_branch_coverage=1 00:15:10.754 --rc genhtml_function_coverage=1 00:15:10.754 --rc genhtml_legend=1 00:15:10.754 --rc geninfo_all_blocks=1 00:15:10.754 --rc geninfo_unexecuted_blocks=1 00:15:10.754 00:15:10.754 ' 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:10.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.754 --rc genhtml_branch_coverage=1 00:15:10.754 --rc genhtml_function_coverage=1 00:15:10.754 --rc genhtml_legend=1 00:15:10.754 --rc geninfo_all_blocks=1 00:15:10.754 --rc geninfo_unexecuted_blocks=1 00:15:10.754 00:15:10.754 ' 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:10.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.754 --rc genhtml_branch_coverage=1 00:15:10.754 --rc genhtml_function_coverage=1 00:15:10.754 --rc genhtml_legend=1 00:15:10.754 --rc geninfo_all_blocks=1 00:15:10.754 --rc geninfo_unexecuted_blocks=1 00:15:10.754 00:15:10.754 ' 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:10.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.754 --rc genhtml_branch_coverage=1 00:15:10.754 --rc genhtml_function_coverage=1 00:15:10.754 --rc genhtml_legend=1 00:15:10.754 --rc geninfo_all_blocks=1 00:15:10.754 --rc geninfo_unexecuted_blocks=1 00:15:10.754 00:15:10.754 ' 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.754 13:09:16 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.754 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:10.755 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:10.755 13:09:16 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:10.755 Cannot 
find device "nvmf_init_br" 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:10.755 Cannot find device "nvmf_init_br2" 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:10.755 Cannot find device "nvmf_tgt_br" 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:10.755 Cannot find device "nvmf_tgt_br2" 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:10.755 Cannot find device "nvmf_init_br" 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:10.755 Cannot find device "nvmf_init_br2" 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:10.755 Cannot find device "nvmf_tgt_br" 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:10.755 Cannot find device "nvmf_tgt_br2" 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:10.755 Cannot find device "nvmf_br" 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:10.755 Cannot find device "nvmf_init_if" 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:10.755 Cannot find device "nvmf_init_if2" 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:10.755 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:10.755 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:10.755 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:15:10.756 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:10.756 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:10.756 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:10.756 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:10.756 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:10.756 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:15:10.756 00:15:10.756 --- 10.0.0.3 ping statistics --- 00:15:10.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.756 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:10.756 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:10.756 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:15:10.756 00:15:10.756 --- 10.0.0.4 ping statistics --- 00:15:10.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.756 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:10.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:10.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:15:10.756 00:15:10.756 --- 10.0.0.1 ping statistics --- 00:15:10.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.756 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:10.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:10.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:15:10.756 00:15:10.756 --- 10.0.0.2 ping statistics --- 00:15:10.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.756 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=69730 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 69730 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 69730 ']' 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.756 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:11.015 [2024-12-06 13:09:17.295293] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:15:11.015 [2024-12-06 13:09:17.295392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.015 [2024-12-06 13:09:17.447261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:11.015 [2024-12-06 13:09:17.488383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.015 [2024-12-06 13:09:17.488436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.015 [2024-12-06 13:09:17.488451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.015 [2024-12-06 13:09:17.488461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.015 [2024-12-06 13:09:17.488470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.015 [2024-12-06 13:09:17.489398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.015 [2024-12-06 13:09:17.489504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.015 [2024-12-06 13:09:17.489571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:11.015 [2024-12-06 13:09:17.489572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:11.950 [2024-12-06 13:09:18.369522] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:11.950 Malloc0 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.950 13:09:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:11.950 [2024-12-06 13:09:18.424629] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:11.950 test case1: single bdev can't be used in multiple subsystems 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:11.950 [2024-12-06 13:09:18.452492] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:11.950 [2024-12-06 13:09:18.452545] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:11.950 [2024-12-06 13:09:18.452558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.950 2024/12/06 13:09:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:11.950 request: 00:15:11.950 { 00:15:11.950 "method": "nvmf_subsystem_add_ns", 00:15:11.950 "params": { 00:15:11.950 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:11.950 "namespace": { 00:15:11.950 "bdev_name": "Malloc0", 00:15:11.950 "no_auto_visible": false, 00:15:11.950 "hide_metadata": false 00:15:11.950 } 00:15:11.950 } 00:15:11.950 } 00:15:11.950 Got JSON-RPC error response 00:15:11.950 GoRPCClient: error on JSON-RPC call 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:11.950 Adding namespace failed - expected result. 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:11.950 test case2: host connect to nvmf target in multiple paths 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:11.950 [2024-12-06 13:09:18.464650] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:11.950 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.951 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:12.208 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:15:12.517 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:12.517 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:15:12.517 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:12.517 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:12.517 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:15:14.418 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:14.418 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:14.418 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:14.418 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 
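Stripped of the rpc_cmd wrapper, test case1 above is a short RPC sequence. A sketch using the same parameters the trace shows; the rpc.py path and every name and flag are copied from the log, only the composition and the expected-failure handling are added:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
# Expected to fail: Malloc0 is already claimed (exclusive_write) by cnode1,
# so this second add_ns returns JSON-RPC error -32602, as seen above.
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    && echo 'unexpected success' || echo 'Adding namespace failed - expected result.'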
00:15:14.418 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:14.418 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:15:14.418 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:14.418 [global] 00:15:14.418 thread=1 00:15:14.418 invalidate=1 00:15:14.418 rw=write 00:15:14.418 time_based=1 00:15:14.418 runtime=1 00:15:14.418 ioengine=libaio 00:15:14.418 direct=1 00:15:14.418 bs=4096 00:15:14.418 iodepth=1 00:15:14.418 norandommap=0 00:15:14.418 numjobs=1 00:15:14.418 00:15:14.418 verify_dump=1 00:15:14.418 verify_backlog=512 00:15:14.418 verify_state_save=0 00:15:14.418 do_verify=1 00:15:14.418 verify=crc32c-intel 00:15:14.418 [job0] 00:15:14.418 filename=/dev/nvme0n1 00:15:14.418 Could not set queue depth (nvme0n1) 00:15:14.676 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:14.676 fio-3.35 00:15:14.676 Starting 1 thread 00:15:15.609 00:15:15.609 job0: (groupid=0, jobs=1): err= 0: pid=69845: Fri Dec 6 13:09:22 2024 00:15:15.609 read: IOPS=2606, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec) 00:15:15.609 slat (nsec): min=15412, max=64020, avg=20665.44, stdev=4241.16 00:15:15.609 clat (usec): min=137, max=335, avg=176.68, stdev=30.79 00:15:15.609 lat (usec): min=153, max=358, avg=197.35, stdev=32.26 00:15:15.609 clat percentiles (usec): 00:15:15.609 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:15:15.609 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 167], 60.00th=[ 176], 00:15:15.609 | 70.00th=[ 186], 80.00th=[ 202], 90.00th=[ 227], 95.00th=[ 241], 00:15:15.609 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 310], 00:15:15.609 | 99.99th=[ 334] 00:15:15.609 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:15:15.609 slat (usec): min=23, max=128, avg=29.34, stdev= 4.58 00:15:15.609 clat (usec): min=98, max=369, avg=124.51, stdev=22.16 00:15:15.609 lat (usec): min=123, max=410, avg=153.85, stdev=24.03 00:15:15.610 clat percentiles (usec): 00:15:15.610 | 1.00th=[ 100], 5.00th=[ 103], 10.00th=[ 104], 20.00th=[ 108], 00:15:15.610 | 30.00th=[ 110], 40.00th=[ 114], 50.00th=[ 119], 60.00th=[ 124], 00:15:15.610 | 70.00th=[ 130], 80.00th=[ 141], 90.00th=[ 157], 95.00th=[ 169], 00:15:15.610 | 99.00th=[ 194], 99.50th=[ 202], 99.90th=[ 217], 99.95th=[ 277], 00:15:15.610 | 99.99th=[ 371] 00:15:15.610 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:15:15.610 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:15.610 lat (usec) : 100=0.33%, 250=98.45%, 500=1.21% 00:15:15.610 cpu : usr=2.80%, sys=10.70%, ctx=5681, majf=0, minf=5 00:15:15.610 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:15.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.610 issued rwts: total=2609,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.610 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:15.610 00:15:15.610 Run status group 0 (all jobs): 00:15:15.610 READ: bw=10.2MiB/s (10.7MB/s), 10.2MiB/s-10.2MiB/s (10.7MB/s-10.7MB/s), io=10.2MiB (10.7MB), run=1001-1001msec 00:15:15.610 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), 
io=12.0MiB (12.6MB), run=1001-1001msec 00:15:15.610 00:15:15.610 Disk stats (read/write): 00:15:15.610 nvme0n1: ios=2478/2560, merge=0/0, ticks=601/348, in_queue=949, util=95.79% 00:15:15.610 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:15.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:15.868 rmmod nvme_tcp 00:15:15.868 rmmod nvme_fabrics 00:15:15.868 rmmod nvme_keyring 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:15:15.868 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 69730 ']' 00:15:15.869 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 69730 00:15:15.869 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 69730 ']' 00:15:15.869 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 69730 00:15:15.869 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:15:15.869 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.869 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69730 00:15:15.869 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:15.869 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:15.869 killing process with pid 69730 00:15:15.869 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 69730' 00:15:15.869 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 69730 00:15:15.869 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 69730 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:16.156 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:15:16.421 00:15:16.421 real 0m6.155s 00:15:16.421 user 0m19.764s 00:15:16.421 sys 0m1.424s 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:15:16.421 ************************************ 00:15:16.421 END TEST nvmf_nmic 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:16.421 ************************************ 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:16.421 ************************************ 00:15:16.421 START TEST nvmf_fio_target 00:15:16.421 ************************************ 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:16.421 * Looking for test storage... 00:15:16.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:15:16.421 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:16.422 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:16.422 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:16.422 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:16.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.681 --rc genhtml_branch_coverage=1 00:15:16.681 --rc genhtml_function_coverage=1 00:15:16.681 --rc genhtml_legend=1 00:15:16.681 --rc geninfo_all_blocks=1 00:15:16.681 --rc geninfo_unexecuted_blocks=1 00:15:16.681 00:15:16.681 ' 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:16.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.681 --rc genhtml_branch_coverage=1 00:15:16.681 --rc genhtml_function_coverage=1 00:15:16.681 --rc genhtml_legend=1 00:15:16.681 --rc geninfo_all_blocks=1 00:15:16.681 --rc geninfo_unexecuted_blocks=1 00:15:16.681 00:15:16.681 ' 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:16.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.681 --rc genhtml_branch_coverage=1 00:15:16.681 --rc genhtml_function_coverage=1 00:15:16.681 --rc genhtml_legend=1 00:15:16.681 --rc geninfo_all_blocks=1 00:15:16.681 --rc geninfo_unexecuted_blocks=1 00:15:16.681 00:15:16.681 ' 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:16.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.681 --rc genhtml_branch_coverage=1 00:15:16.681 --rc genhtml_function_coverage=1 00:15:16.681 --rc genhtml_legend=1 00:15:16.681 --rc geninfo_all_blocks=1 00:15:16.681 --rc geninfo_unexecuted_blocks=1 00:15:16.681 00:15:16.681 ' 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:16.681 
13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:16.681 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:16.681 13:09:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:16.681 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:16.682 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:16.682 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:16.682 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:16.682 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:16.682 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.682 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:16.682 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:16.682 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:16.682 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:16.682 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:16.682 Cannot find device "nvmf_init_br" 00:15:16.682 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:15:16.682 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:16.682 Cannot find device "nvmf_init_br2" 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:16.682 Cannot find device "nvmf_tgt_br" 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:16.682 Cannot find device "nvmf_tgt_br2" 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:16.682 Cannot find device "nvmf_init_br" 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:16.682 Cannot find device "nvmf_init_br2" 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:16.682 Cannot find device "nvmf_tgt_br" 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:16.682 Cannot find device "nvmf_tgt_br2" 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:16.682 Cannot find device "nvmf_br" 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:16.682 Cannot find device "nvmf_init_if" 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:16.682 Cannot find device "nvmf_init_if2" 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:15:16.682 
13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:16.682 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:16.941 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:16.941 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:15:16.941 00:15:16.941 --- 10.0.0.3 ping statistics --- 00:15:16.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.941 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:16.941 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:16.941 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:15:16.941 00:15:16.941 --- 10.0.0.4 ping statistics --- 00:15:16.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.941 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:16.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:16.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:16.941 00:15:16.941 --- 10.0.0.1 ping statistics --- 00:15:16.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.941 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:16.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:16.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:15:16.941 00:15:16.941 --- 10.0.0.2 ping statistics --- 00:15:16.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.941 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:16.941 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:16.942 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:16.942 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.942 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=70075 00:15:16.942 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:16.942 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 70075 00:15:16.942 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 70075 ']' 00:15:16.942 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.942 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:16.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.942 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.942 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:16.942 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.942 [2024-12-06 13:09:23.442606] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
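At this point nvmf_veth_init has finished assembling the test network that the target listens on from inside the nvmf_tgt_ns_spdk namespace: veth pairs for two initiator interfaces (nvmf_init_if, nvmf_init_if2) and two target interfaces (nvmf_tgt_if, nvmf_tgt_if2, both moved into the namespace), every bridge-side peer enslaved to nvmf_br, iptables ACCEPT rules opened for NVMe/TCP port 4420, and ping checks run in each direction. A minimal standalone sketch of the same topology, cut down to a single initiator/target pair; interface names and the 10.0.0.0/24 addressing follow the trace, but this is a simplified reconstruction, not the exact common.sh code:

  # one initiator-side and one target-side veth pair, joined through nvmf_br (reduced sketch)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                 # both bridge-side peers join nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.3                                      # initiator -> target across the bridge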
00:15:16.942 [2024-12-06 13:09:23.442708] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.200 [2024-12-06 13:09:23.598293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:17.200 [2024-12-06 13:09:23.638174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.200 [2024-12-06 13:09:23.638284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.200 [2024-12-06 13:09:23.638306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.200 [2024-12-06 13:09:23.638321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.200 [2024-12-06 13:09:23.638336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.200 [2024-12-06 13:09:23.640061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.200 [2024-12-06 13:09:23.640117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.200 [2024-12-06 13:09:23.640254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:17.200 [2024-12-06 13:09:23.640264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.458 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.458 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:15:17.458 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:17.458 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:17.458 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.458 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.458 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:17.716 [2024-12-06 13:09:24.089893] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.716 13:09:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:17.974 13:09:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:17.974 13:09:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:18.540 13:09:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:18.541 13:09:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:18.798 13:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:18.798 13:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:19.055 13:09:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:19.055 13:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:19.313 13:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:19.570 13:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:19.570 13:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:19.827 13:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:19.828 13:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:20.084 13:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:20.084 13:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:20.650 13:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:20.908 13:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:20.908 13:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:21.166 13:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:21.166 13:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:21.425 13:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:21.684 [2024-12-06 13:09:28.068644] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:21.684 13:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:21.942 13:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:22.508 13:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:22.508 13:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:22.508 13:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:15:22.508 13:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:15:22.508 13:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:15:22.508 13:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:15:22.508 13:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:15:25.038 13:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:25.038 13:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:25.038 13:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:25.038 13:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:15:25.038 13:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:25.038 13:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:15:25.038 13:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:25.038 [global] 00:15:25.038 thread=1 00:15:25.038 invalidate=1 00:15:25.038 rw=write 00:15:25.038 time_based=1 00:15:25.038 runtime=1 00:15:25.038 ioengine=libaio 00:15:25.038 direct=1 00:15:25.038 bs=4096 00:15:25.038 iodepth=1 00:15:25.038 norandommap=0 00:15:25.038 numjobs=1 00:15:25.038 00:15:25.038 verify_dump=1 00:15:25.038 verify_backlog=512 00:15:25.038 verify_state_save=0 00:15:25.038 do_verify=1 00:15:25.038 verify=crc32c-intel 00:15:25.038 [job0] 00:15:25.038 filename=/dev/nvme0n1 00:15:25.038 [job1] 00:15:25.038 filename=/dev/nvme0n2 00:15:25.038 [job2] 00:15:25.038 filename=/dev/nvme0n3 00:15:25.038 [job3] 00:15:25.038 filename=/dev/nvme0n4 00:15:25.038 Could not set queue depth (nvme0n1) 00:15:25.038 Could not set queue depth (nvme0n2) 00:15:25.038 Could not set queue depth (nvme0n3) 00:15:25.038 Could not set queue depth (nvme0n4) 00:15:25.038 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:25.038 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:25.038 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:25.038 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:25.038 fio-3.35 00:15:25.038 Starting 4 threads 00:15:25.996 00:15:25.996 job0: (groupid=0, jobs=1): err= 0: pid=70365: Fri Dec 6 13:09:32 2024 00:15:25.996 read: IOPS=2786, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec) 00:15:25.996 slat (nsec): min=12918, max=85331, avg=17417.21, stdev=4923.04 00:15:25.996 clat (usec): min=138, max=2133, avg=166.22, stdev=44.00 00:15:25.996 lat (usec): min=151, max=2157, avg=183.64, stdev=44.65 00:15:25.996 clat percentiles (usec): 00:15:25.996 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:15:25.996 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:15:25.996 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 194], 00:15:25.996 | 99.00th=[ 253], 99.50th=[ 343], 99.90th=[ 502], 99.95th=[ 515], 00:15:25.996 | 99.99th=[ 2147] 00:15:25.996 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:15:25.996 slat 
(usec): min=19, max=144, avg=24.40, stdev= 4.99 00:15:25.997 clat (usec): min=105, max=471, avg=130.74, stdev=17.18 00:15:25.997 lat (usec): min=125, max=494, avg=155.13, stdev=18.55 00:15:25.997 clat percentiles (usec): 00:15:25.997 | 1.00th=[ 109], 5.00th=[ 114], 10.00th=[ 117], 20.00th=[ 120], 00:15:25.997 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 129], 00:15:25.997 | 70.00th=[ 133], 80.00th=[ 141], 90.00th=[ 153], 95.00th=[ 163], 00:15:25.997 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 215], 99.95th=[ 412], 00:15:25.997 | 99.99th=[ 474] 00:15:25.997 bw ( KiB/s): min=12263, max=12263, per=30.30%, avg=12263.00, stdev= 0.00, samples=1 00:15:25.997 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:15:25.997 lat (usec) : 250=99.49%, 500=0.46%, 750=0.03% 00:15:25.997 lat (msec) : 4=0.02% 00:15:25.997 cpu : usr=2.20%, sys=9.70%, ctx=5862, majf=0, minf=11 00:15:25.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:25.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.997 issued rwts: total=2789,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:25.997 job1: (groupid=0, jobs=1): err= 0: pid=70366: Fri Dec 6 13:09:32 2024 00:15:25.997 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:25.997 slat (nsec): min=15057, max=81342, avg=20787.67, stdev=6510.77 00:15:25.997 clat (usec): min=146, max=719, avg=304.36, stdev=43.18 00:15:25.997 lat (usec): min=183, max=762, avg=325.14, stdev=44.57 00:15:25.997 clat percentiles (usec): 00:15:25.997 | 1.00th=[ 262], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 281], 00:15:25.997 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 297], 00:15:25.997 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 338], 95.00th=[ 416], 00:15:25.997 | 99.00th=[ 449], 99.50th=[ 465], 99.90th=[ 627], 99.95th=[ 717], 00:15:25.997 | 99.99th=[ 717] 00:15:25.997 write: IOPS=1934, BW=7736KiB/s (7922kB/s)(7744KiB/1001msec); 0 zone resets 00:15:25.997 slat (usec): min=21, max=110, avg=30.79, stdev= 9.17 00:15:25.997 clat (usec): min=108, max=801, avg=223.66, stdev=26.56 00:15:25.997 lat (usec): min=144, max=856, avg=254.46, stdev=27.82 00:15:25.997 clat percentiles (usec): 00:15:25.997 | 1.00th=[ 176], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 208], 00:15:25.997 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 225], 00:15:25.997 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 258], 00:15:25.997 | 99.00th=[ 297], 99.50th=[ 326], 99.90th=[ 578], 99.95th=[ 799], 00:15:25.997 | 99.99th=[ 799] 00:15:25.997 bw ( KiB/s): min= 8192, max= 8192, per=20.24%, avg=8192.00, stdev= 0.00, samples=1 00:15:25.997 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:25.997 lat (usec) : 250=51.07%, 500=48.82%, 750=0.09%, 1000=0.03% 00:15:25.997 cpu : usr=1.90%, sys=7.00%, ctx=3484, majf=0, minf=13 00:15:25.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:25.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.997 issued rwts: total=1536,1936,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:25.997 job2: (groupid=0, jobs=1): err= 0: pid=70367: Fri Dec 6 13:09:32 2024 00:15:25.997 read: 
IOPS=1552, BW=6210KiB/s (6359kB/s)(6216KiB/1001msec) 00:15:25.997 slat (nsec): min=14949, max=82533, avg=20664.84, stdev=4523.21 00:15:25.997 clat (usec): min=181, max=963, avg=289.61, stdev=36.17 00:15:25.997 lat (usec): min=201, max=997, avg=310.27, stdev=37.11 00:15:25.997 clat percentiles (usec): 00:15:25.997 | 1.00th=[ 196], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[ 277], 00:15:25.997 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:15:25.997 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 318], 00:15:25.997 | 99.00th=[ 416], 99.50th=[ 445], 99.90th=[ 914], 99.95th=[ 963], 00:15:25.997 | 99.99th=[ 963] 00:15:25.997 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:25.997 slat (usec): min=20, max=145, avg=29.22, stdev= 7.09 00:15:25.997 clat (usec): min=118, max=401, avg=219.69, stdev=25.64 00:15:25.997 lat (usec): min=144, max=430, avg=248.91, stdev=25.75 00:15:25.997 clat percentiles (usec): 00:15:25.997 | 1.00th=[ 135], 5.00th=[ 165], 10.00th=[ 200], 20.00th=[ 208], 00:15:25.997 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 225], 00:15:25.997 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 251], 00:15:25.997 | 99.00th=[ 285], 99.50th=[ 322], 99.90th=[ 392], 99.95th=[ 396], 00:15:25.997 | 99.99th=[ 404] 00:15:25.997 bw ( KiB/s): min= 8175, max= 8175, per=20.20%, avg=8175.00, stdev= 0.00, samples=1 00:15:25.997 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:15:25.997 lat (usec) : 250=55.33%, 500=44.61%, 1000=0.06% 00:15:25.997 cpu : usr=1.70%, sys=7.10%, ctx=3604, majf=0, minf=10 00:15:25.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:25.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.997 issued rwts: total=1554,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:25.997 job3: (groupid=0, jobs=1): err= 0: pid=70368: Fri Dec 6 13:09:32 2024 00:15:25.997 read: IOPS=2648, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1001msec) 00:15:25.997 slat (nsec): min=13189, max=45531, avg=15268.83, stdev=3112.74 00:15:25.997 clat (usec): min=153, max=477, avg=174.95, stdev=14.71 00:15:25.997 lat (usec): min=167, max=493, avg=190.22, stdev=15.23 00:15:25.997 clat percentiles (usec): 00:15:25.997 | 1.00th=[ 159], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 165], 00:15:25.997 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:15:25.997 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:15:25.997 | 99.00th=[ 235], 99.50th=[ 251], 99.90th=[ 277], 99.95th=[ 461], 00:15:25.997 | 99.99th=[ 478] 00:15:25.997 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:15:25.997 slat (nsec): min=19009, max=92671, avg=22714.38, stdev=4827.50 00:15:25.997 clat (usec): min=114, max=244, avg=135.57, stdev= 9.99 00:15:25.997 lat (usec): min=135, max=337, avg=158.28, stdev=11.84 00:15:25.997 clat percentiles (usec): 00:15:25.997 | 1.00th=[ 120], 5.00th=[ 123], 10.00th=[ 125], 20.00th=[ 128], 00:15:25.997 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:15:25.997 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 153], 00:15:25.997 | 99.00th=[ 167], 99.50th=[ 172], 99.90th=[ 190], 99.95th=[ 192], 00:15:25.997 | 99.99th=[ 245] 00:15:25.997 bw ( KiB/s): min=12288, max=12288, per=30.36%, avg=12288.00, stdev= 0.00, samples=1 
00:15:25.997 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:25.997 lat (usec) : 250=99.72%, 500=0.28% 00:15:25.997 cpu : usr=2.10%, sys=8.40%, ctx=5723, majf=0, minf=6 00:15:25.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:25.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.997 issued rwts: total=2651,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:25.997 00:15:25.997 Run status group 0 (all jobs): 00:15:25.997 READ: bw=33.3MiB/s (34.9MB/s), 6138KiB/s-10.9MiB/s (6285kB/s-11.4MB/s), io=33.3MiB (34.9MB), run=1001-1001msec 00:15:25.997 WRITE: bw=39.5MiB/s (41.4MB/s), 7736KiB/s-12.0MiB/s (7922kB/s-12.6MB/s), io=39.6MiB (41.5MB), run=1001-1001msec 00:15:25.997 00:15:25.997 Disk stats (read/write): 00:15:25.997 nvme0n1: ios=2528/2560, merge=0/0, ticks=460/362, in_queue=822, util=89.38% 00:15:25.997 nvme0n2: ios=1518/1536, merge=0/0, ticks=470/361, in_queue=831, util=88.99% 00:15:25.997 nvme0n3: ios=1563/1541, merge=0/0, ticks=493/357, in_queue=850, util=89.84% 00:15:25.997 nvme0n4: ios=2391/2560, merge=0/0, ticks=424/369, in_queue=793, util=89.78% 00:15:25.997 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:25.997 [global] 00:15:25.997 thread=1 00:15:25.997 invalidate=1 00:15:25.997 rw=randwrite 00:15:25.997 time_based=1 00:15:25.997 runtime=1 00:15:25.997 ioengine=libaio 00:15:25.997 direct=1 00:15:25.997 bs=4096 00:15:25.997 iodepth=1 00:15:25.997 norandommap=0 00:15:25.997 numjobs=1 00:15:25.997 00:15:25.997 verify_dump=1 00:15:25.997 verify_backlog=512 00:15:25.997 verify_state_save=0 00:15:25.997 do_verify=1 00:15:25.997 verify=crc32c-intel 00:15:25.997 [job0] 00:15:25.997 filename=/dev/nvme0n1 00:15:25.997 [job1] 00:15:25.997 filename=/dev/nvme0n2 00:15:25.997 [job2] 00:15:25.997 filename=/dev/nvme0n3 00:15:25.997 [job3] 00:15:25.997 filename=/dev/nvme0n4 00:15:25.997 Could not set queue depth (nvme0n1) 00:15:25.997 Could not set queue depth (nvme0n2) 00:15:25.997 Could not set queue depth (nvme0n3) 00:15:25.997 Could not set queue depth (nvme0n4) 00:15:26.255 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:26.255 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:26.255 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:26.255 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:26.255 fio-3.35 00:15:26.255 Starting 4 threads 00:15:27.628 00:15:27.628 job0: (groupid=0, jobs=1): err= 0: pid=70421: Fri Dec 6 13:09:33 2024 00:15:27.628 read: IOPS=2960, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec) 00:15:27.628 slat (nsec): min=13345, max=40908, avg=16106.38, stdev=3158.80 00:15:27.628 clat (usec): min=133, max=2037, avg=163.22, stdev=48.30 00:15:27.628 lat (usec): min=147, max=2052, avg=179.32, stdev=48.37 00:15:27.628 clat percentiles (usec): 00:15:27.628 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:15:27.628 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:15:27.628 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 182], 
00:15:27.628 | 99.00th=[ 233], 99.50th=[ 245], 99.90th=[ 363], 99.95th=[ 1811], 00:15:27.628 | 99.99th=[ 2040] 00:15:27.628 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:15:27.629 slat (nsec): min=19810, max=85585, avg=23962.20, stdev=5306.53 00:15:27.629 clat (usec): min=103, max=266, avg=124.86, stdev= 9.54 00:15:27.629 lat (usec): min=124, max=352, avg=148.82, stdev=11.72 00:15:27.629 clat percentiles (usec): 00:15:27.629 | 1.00th=[ 108], 5.00th=[ 112], 10.00th=[ 114], 20.00th=[ 118], 00:15:27.629 | 30.00th=[ 121], 40.00th=[ 123], 50.00th=[ 125], 60.00th=[ 127], 00:15:27.629 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 137], 95.00th=[ 141], 00:15:27.629 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 174], 99.95th=[ 178], 00:15:27.629 | 99.99th=[ 269] 00:15:27.629 bw ( KiB/s): min=12288, max=12288, per=30.82%, avg=12288.00, stdev= 0.00, samples=1 00:15:27.629 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:27.629 lat (usec) : 250=99.88%, 500=0.08% 00:15:27.629 lat (msec) : 2=0.02%, 4=0.02% 00:15:27.629 cpu : usr=2.70%, sys=8.90%, ctx=6037, majf=0, minf=9 00:15:27.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:27.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.629 issued rwts: total=2963,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:27.629 job1: (groupid=0, jobs=1): err= 0: pid=70422: Fri Dec 6 13:09:33 2024 00:15:27.629 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:27.629 slat (nsec): min=13220, max=76657, avg=21578.42, stdev=6842.79 00:15:27.629 clat (usec): min=170, max=482, avg=295.10, stdev=36.43 00:15:27.629 lat (usec): min=185, max=514, avg=316.67, stdev=38.66 00:15:27.629 clat percentiles (usec): 00:15:27.629 | 1.00th=[ 243], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[ 273], 00:15:27.629 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:15:27.629 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 330], 95.00th=[ 383], 00:15:27.629 | 99.00th=[ 433], 99.50th=[ 441], 99.90th=[ 465], 99.95th=[ 482], 00:15:27.629 | 99.99th=[ 482] 00:15:27.629 write: IOPS=2015, BW=8064KiB/s (8257kB/s)(8072KiB/1001msec); 0 zone resets 00:15:27.629 slat (usec): min=19, max=119, avg=28.94, stdev= 9.16 00:15:27.629 clat (usec): min=100, max=1199, avg=221.28, stdev=52.24 00:15:27.629 lat (usec): min=121, max=1223, avg=250.22, stdev=52.56 00:15:27.629 clat percentiles (usec): 00:15:27.629 | 1.00th=[ 119], 5.00th=[ 130], 10.00th=[ 153], 20.00th=[ 204], 00:15:27.629 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 229], 00:15:27.629 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 258], 95.00th=[ 285], 00:15:27.629 | 99.00th=[ 355], 99.50th=[ 400], 99.90th=[ 791], 99.95th=[ 906], 00:15:27.629 | 99.99th=[ 1205] 00:15:27.629 bw ( KiB/s): min= 8192, max= 8192, per=20.55%, avg=8192.00, stdev= 0.00, samples=1 00:15:27.629 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:27.629 lat (usec) : 250=50.00%, 500=49.89%, 750=0.03%, 1000=0.06% 00:15:27.629 lat (msec) : 2=0.03% 00:15:27.629 cpu : usr=2.30%, sys=6.60%, ctx=3556, majf=0, minf=11 00:15:27.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:27.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:15:27.629 issued rwts: total=1536,2018,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:27.629 job2: (groupid=0, jobs=1): err= 0: pid=70423: Fri Dec 6 13:09:33 2024 00:15:27.629 read: IOPS=2566, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:15:27.629 slat (nsec): min=13408, max=42790, avg=16526.26, stdev=3025.21 00:15:27.629 clat (usec): min=153, max=277, avg=175.14, stdev= 9.94 00:15:27.629 lat (usec): min=169, max=291, avg=191.67, stdev=10.56 00:15:27.629 clat percentiles (usec): 00:15:27.629 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:15:27.629 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 178], 00:15:27.629 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 188], 95.00th=[ 194], 00:15:27.629 | 99.00th=[ 202], 99.50th=[ 206], 99.90th=[ 217], 99.95th=[ 225], 00:15:27.629 | 99.99th=[ 277] 00:15:27.629 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:15:27.629 slat (usec): min=19, max=160, avg=23.64, stdev= 5.32 00:15:27.629 clat (usec): min=111, max=471, avg=137.89, stdev=13.36 00:15:27.629 lat (usec): min=134, max=492, avg=161.52, stdev=14.80 00:15:27.629 clat percentiles (usec): 00:15:27.629 | 1.00th=[ 121], 5.00th=[ 124], 10.00th=[ 126], 20.00th=[ 129], 00:15:27.629 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:15:27.629 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 151], 95.00th=[ 157], 00:15:27.629 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 251], 99.95th=[ 351], 00:15:27.629 | 99.99th=[ 474] 00:15:27.629 bw ( KiB/s): min=12288, max=12288, per=30.82%, avg=12288.00, stdev= 0.00, samples=1 00:15:27.629 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:27.629 lat (usec) : 250=99.91%, 500=0.09% 00:15:27.629 cpu : usr=2.10%, sys=9.00%, ctx=5641, majf=0, minf=9 00:15:27.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:27.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.629 issued rwts: total=2569,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:27.629 job3: (groupid=0, jobs=1): err= 0: pid=70424: Fri Dec 6 13:09:33 2024 00:15:27.629 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:27.629 slat (nsec): min=13302, max=78088, avg=19497.97, stdev=7201.35 00:15:27.629 clat (usec): min=181, max=1049, avg=314.50, stdev=54.50 00:15:27.629 lat (usec): min=232, max=1104, avg=334.00, stdev=57.47 00:15:27.629 clat percentiles (usec): 00:15:27.629 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 281], 00:15:27.629 | 30.00th=[ 289], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 302], 00:15:27.629 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[ 400], 95.00th=[ 420], 00:15:27.629 | 99.00th=[ 482], 99.50th=[ 562], 99.90th=[ 668], 99.95th=[ 1057], 00:15:27.629 | 99.99th=[ 1057] 00:15:27.629 write: IOPS=1812, BW=7249KiB/s (7423kB/s)(7256KiB/1001msec); 0 zone resets 00:15:27.629 slat (nsec): min=19256, max=91291, avg=30499.49, stdev=9016.01 00:15:27.629 clat (usec): min=117, max=2487, avg=233.57, stdev=67.40 00:15:27.629 lat (usec): min=141, max=2512, avg=264.07, stdev=68.16 00:15:27.629 clat percentiles (usec): 00:15:27.629 | 1.00th=[ 149], 5.00th=[ 194], 10.00th=[ 204], 20.00th=[ 210], 00:15:27.629 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 231], 00:15:27.629 | 70.00th=[ 237], 
80.00th=[ 247], 90.00th=[ 269], 95.00th=[ 297], 00:15:27.629 | 99.00th=[ 388], 99.50th=[ 441], 99.90th=[ 701], 99.95th=[ 2474], 00:15:27.629 | 99.99th=[ 2474] 00:15:27.629 bw ( KiB/s): min= 8192, max= 8192, per=20.55%, avg=8192.00, stdev= 0.00, samples=1 00:15:27.629 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:27.629 lat (usec) : 250=44.21%, 500=55.16%, 750=0.57% 00:15:27.629 lat (msec) : 2=0.03%, 4=0.03% 00:15:27.629 cpu : usr=1.70%, sys=6.50%, ctx=3350, majf=0, minf=17 00:15:27.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:27.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.629 issued rwts: total=1536,1814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:27.629 00:15:27.629 Run status group 0 (all jobs): 00:15:27.629 READ: bw=33.6MiB/s (35.2MB/s), 6138KiB/s-11.6MiB/s (6285kB/s-12.1MB/s), io=33.6MiB (35.2MB), run=1001-1001msec 00:15:27.629 WRITE: bw=38.9MiB/s (40.8MB/s), 7249KiB/s-12.0MiB/s (7423kB/s-12.6MB/s), io=39.0MiB (40.9MB), run=1001-1001msec 00:15:27.629 00:15:27.629 Disk stats (read/write): 00:15:27.629 nvme0n1: ios=2610/2623, merge=0/0, ticks=458/348, in_queue=806, util=87.78% 00:15:27.629 nvme0n2: ios=1489/1536, merge=0/0, ticks=448/355, in_queue=803, util=88.01% 00:15:27.629 nvme0n3: ios=2280/2560, merge=0/0, ticks=413/381, in_queue=794, util=89.13% 00:15:27.629 nvme0n4: ios=1345/1536, merge=0/0, ticks=428/380, in_queue=808, util=89.27% 00:15:27.629 13:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:27.629 [global] 00:15:27.629 thread=1 00:15:27.629 invalidate=1 00:15:27.629 rw=write 00:15:27.629 time_based=1 00:15:27.629 runtime=1 00:15:27.629 ioengine=libaio 00:15:27.629 direct=1 00:15:27.629 bs=4096 00:15:27.629 iodepth=128 00:15:27.629 norandommap=0 00:15:27.629 numjobs=1 00:15:27.629 00:15:27.629 verify_dump=1 00:15:27.629 verify_backlog=512 00:15:27.629 verify_state_save=0 00:15:27.629 do_verify=1 00:15:27.629 verify=crc32c-intel 00:15:27.629 [job0] 00:15:27.629 filename=/dev/nvme0n1 00:15:27.629 [job1] 00:15:27.629 filename=/dev/nvme0n2 00:15:27.629 [job2] 00:15:27.629 filename=/dev/nvme0n3 00:15:27.629 [job3] 00:15:27.629 filename=/dev/nvme0n4 00:15:27.629 Could not set queue depth (nvme0n1) 00:15:27.629 Could not set queue depth (nvme0n2) 00:15:27.629 Could not set queue depth (nvme0n3) 00:15:27.629 Could not set queue depth (nvme0n4) 00:15:27.629 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:27.629 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:27.629 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:27.629 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:27.629 fio-3.35 00:15:27.629 Starting 4 threads 00:15:29.003 00:15:29.003 job0: (groupid=0, jobs=1): err= 0: pid=70490: Fri Dec 6 13:09:35 2024 00:15:29.003 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:15:29.003 slat (usec): min=6, max=12707, avg=165.19, stdev=910.60 00:15:29.003 clat (usec): min=8469, max=41321, avg=19814.38, stdev=5174.23 00:15:29.003 lat (usec): min=8479, max=41342, 
avg=19979.57, stdev=5273.81 00:15:29.003 clat percentiles (usec): 00:15:29.003 | 1.00th=[11994], 5.00th=[13960], 10.00th=[14615], 20.00th=[15926], 00:15:29.003 | 30.00th=[16909], 40.00th=[17171], 50.00th=[18744], 60.00th=[19530], 00:15:29.003 | 70.00th=[20317], 80.00th=[23987], 90.00th=[27657], 95.00th=[29230], 00:15:29.003 | 99.00th=[37487], 99.50th=[39060], 99.90th=[41157], 99.95th=[41157], 00:15:29.003 | 99.99th=[41157] 00:15:29.003 write: IOPS=2568, BW=10.0MiB/s (10.5MB/s)(10.1MiB/1008msec); 0 zone resets 00:15:29.003 slat (usec): min=12, max=7911, avg=213.61, stdev=881.69 00:15:29.003 clat (usec): min=7739, max=78052, avg=29562.40, stdev=14642.13 00:15:29.003 lat (usec): min=7771, max=78079, avg=29776.02, stdev=14745.56 00:15:29.003 clat percentiles (usec): 00:15:29.003 | 1.00th=[ 8356], 5.00th=[14484], 10.00th=[15008], 20.00th=[17695], 00:15:29.003 | 30.00th=[18744], 40.00th=[20841], 50.00th=[23987], 60.00th=[31065], 00:15:29.003 | 70.00th=[37487], 80.00th=[41681], 90.00th=[47449], 95.00th=[58983], 00:15:29.003 | 99.00th=[74974], 99.50th=[76022], 99.90th=[78119], 99.95th=[78119], 00:15:29.003 | 99.99th=[78119] 00:15:29.003 bw ( KiB/s): min= 7944, max=12536, per=16.00%, avg=10240.00, stdev=3247.03, samples=2 00:15:29.003 iops : min= 1986, max= 3134, avg=2560.00, stdev=811.76, samples=2 00:15:29.003 lat (msec) : 10=0.72%, 20=53.58%, 50=40.86%, 100=4.84% 00:15:29.003 cpu : usr=2.58%, sys=8.44%, ctx=293, majf=0, minf=1 00:15:29.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:29.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:29.003 issued rwts: total=2560,2589,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.003 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:29.003 job1: (groupid=0, jobs=1): err= 0: pid=70491: Fri Dec 6 13:09:35 2024 00:15:29.003 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:15:29.003 slat (usec): min=3, max=3497, avg=86.09, stdev=448.80 00:15:29.003 clat (usec): min=8491, max=15492, avg=11487.46, stdev=814.24 00:15:29.003 lat (usec): min=8514, max=15525, avg=11573.55, stdev=889.88 00:15:29.003 clat percentiles (usec): 00:15:29.003 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10945], 20.00th=[11076], 00:15:29.003 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11338], 60.00th=[11469], 00:15:29.003 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12387], 95.00th=[12911], 00:15:29.003 | 99.00th=[14222], 99.50th=[14484], 99.90th=[14877], 99.95th=[15008], 00:15:29.003 | 99.99th=[15533] 00:15:29.003 write: IOPS=5725, BW=22.4MiB/s (23.5MB/s)(22.4MiB/1001msec); 0 zone resets 00:15:29.003 slat (usec): min=11, max=3272, avg=82.10, stdev=376.54 00:15:29.003 clat (usec): min=728, max=15177, avg=10800.45, stdev=1325.23 00:15:29.003 lat (usec): min=747, max=15209, avg=10882.56, stdev=1311.29 00:15:29.003 clat percentiles (usec): 00:15:29.003 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[10159], 00:15:29.003 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:15:29.003 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:15:29.003 | 99.00th=[12911], 99.50th=[13435], 99.90th=[14091], 99.95th=[14353], 00:15:29.003 | 99.99th=[15139] 00:15:29.003 bw ( KiB/s): min=24175, max=24175, per=37.77%, avg=24175.00, stdev= 0.00, samples=1 00:15:29.003 iops : min= 6043, max= 6043, avg=6043.00, stdev= 0.00, samples=1 00:15:29.003 lat (usec) : 750=0.01%, 1000=0.08% 
00:15:29.003 lat (msec) : 4=0.13%, 10=11.91%, 20=87.87% 00:15:29.003 cpu : usr=4.60%, sys=15.90%, ctx=425, majf=0, minf=1 00:15:29.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:29.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:29.003 issued rwts: total=5632,5731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.003 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:29.003 job2: (groupid=0, jobs=1): err= 0: pid=70492: Fri Dec 6 13:09:35 2024 00:15:29.003 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:15:29.003 slat (usec): min=6, max=16474, avg=197.01, stdev=1159.07 00:15:29.003 clat (usec): min=12795, max=54980, avg=25525.39, stdev=7830.04 00:15:29.003 lat (usec): min=14711, max=63290, avg=25722.40, stdev=7943.37 00:15:29.003 clat percentiles (usec): 00:15:29.003 | 1.00th=[16581], 5.00th=[16909], 10.00th=[17433], 20.00th=[17957], 00:15:29.003 | 30.00th=[19268], 40.00th=[21890], 50.00th=[22414], 60.00th=[26346], 00:15:29.003 | 70.00th=[29230], 80.00th=[33817], 90.00th=[36963], 95.00th=[38536], 00:15:29.003 | 99.00th=[47449], 99.50th=[52167], 99.90th=[53740], 99.95th=[53740], 00:15:29.003 | 99.99th=[54789] 00:15:29.003 write: IOPS=2693, BW=10.5MiB/s (11.0MB/s)(10.6MiB/1010msec); 0 zone resets 00:15:29.003 slat (usec): min=11, max=8572, avg=173.60, stdev=816.03 00:15:29.003 clat (usec): min=8991, max=68251, avg=22740.47, stdev=8494.77 00:15:29.003 lat (usec): min=9043, max=68278, avg=22914.06, stdev=8564.36 00:15:29.003 clat percentiles (usec): 00:15:29.003 | 1.00th=[13304], 5.00th=[14746], 10.00th=[15008], 20.00th=[15533], 00:15:29.003 | 30.00th=[17433], 40.00th=[19530], 50.00th=[22414], 60.00th=[23200], 00:15:29.003 | 70.00th=[23987], 80.00th=[26870], 90.00th=[29492], 95.00th=[40633], 00:15:29.003 | 99.00th=[59507], 99.50th=[61080], 99.90th=[68682], 99.95th=[68682], 00:15:29.003 | 99.99th=[68682] 00:15:29.003 bw ( KiB/s): min= 9424, max=11320, per=16.21%, avg=10372.00, stdev=1340.67, samples=2 00:15:29.003 iops : min= 2356, max= 2830, avg=2593.00, stdev=335.17, samples=2 00:15:29.003 lat (msec) : 10=0.13%, 20=37.12%, 50=61.19%, 100=1.55% 00:15:29.003 cpu : usr=2.68%, sys=8.13%, ctx=205, majf=0, minf=10 00:15:29.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:29.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:29.003 issued rwts: total=2560,2720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.003 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:29.003 job3: (groupid=0, jobs=1): err= 0: pid=70493: Fri Dec 6 13:09:35 2024 00:15:29.003 read: IOPS=4657, BW=18.2MiB/s (19.1MB/s)(18.2MiB/1003msec) 00:15:29.003 slat (usec): min=6, max=3765, avg=101.48, stdev=470.22 00:15:29.003 clat (usec): min=2648, max=16597, avg=13225.04, stdev=1384.80 00:15:29.003 lat (usec): min=2676, max=17323, avg=13326.52, stdev=1325.92 00:15:29.003 clat percentiles (usec): 00:15:29.003 | 1.00th=[ 6849], 5.00th=[10945], 10.00th=[11863], 20.00th=[13042], 00:15:29.003 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:15:29.003 | 70.00th=[13698], 80.00th=[13829], 90.00th=[14222], 95.00th=[14746], 00:15:29.003 | 99.00th=[16188], 99.50th=[16319], 99.90th=[16581], 99.95th=[16581], 00:15:29.003 | 99.99th=[16581] 00:15:29.003 write: IOPS=5104, BW=19.9MiB/s 
(20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:15:29.003 slat (usec): min=9, max=3724, avg=94.84, stdev=375.85 00:15:29.003 clat (usec): min=7224, max=16075, avg=12676.36, stdev=1281.18 00:15:29.003 lat (usec): min=7255, max=16104, avg=12771.19, stdev=1275.71 00:15:29.003 clat percentiles (usec): 00:15:29.003 | 1.00th=[10421], 5.00th=[10814], 10.00th=[10945], 20.00th=[11338], 00:15:29.003 | 30.00th=[11731], 40.00th=[12518], 50.00th=[12911], 60.00th=[13173], 00:15:29.003 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14222], 95.00th=[14615], 00:15:29.003 | 99.00th=[15795], 99.50th=[15926], 99.90th=[16057], 99.95th=[16057], 00:15:29.003 | 99.99th=[16057] 00:15:29.003 bw ( KiB/s): min=19968, max=20480, per=31.60%, avg=20224.00, stdev=362.04, samples=2 00:15:29.003 iops : min= 4992, max= 5120, avg=5056.00, stdev=90.51, samples=2 00:15:29.003 lat (msec) : 4=0.33%, 10=0.49%, 20=99.18% 00:15:29.003 cpu : usr=4.59%, sys=14.37%, ctx=556, majf=0, minf=1 00:15:29.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:29.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:29.003 issued rwts: total=4671,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.003 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:29.003 00:15:29.003 Run status group 0 (all jobs): 00:15:29.003 READ: bw=59.6MiB/s (62.5MB/s), 9.90MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=60.2MiB (63.2MB), run=1001-1010msec 00:15:29.003 WRITE: bw=62.5MiB/s (65.5MB/s), 10.0MiB/s-22.4MiB/s (10.5MB/s-23.5MB/s), io=63.1MiB (66.2MB), run=1001-1010msec 00:15:29.003 00:15:29.003 Disk stats (read/write): 00:15:29.003 nvme0n1: ios=2098/2343, merge=0/0, ticks=19949/30715, in_queue=50664, util=88.26% 00:15:29.003 nvme0n2: ios=4724/5120, merge=0/0, ticks=16119/15369, in_queue=31488, util=88.66% 00:15:29.003 nvme0n3: ios=2127/2560, merge=0/0, ticks=17372/16346, in_queue=33718, util=89.15% 00:15:29.003 nvme0n4: ios=4096/4345, merge=0/0, ticks=12523/11955, in_queue=24478, util=89.71% 00:15:29.003 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:29.003 [global] 00:15:29.003 thread=1 00:15:29.003 invalidate=1 00:15:29.003 rw=randwrite 00:15:29.003 time_based=1 00:15:29.003 runtime=1 00:15:29.003 ioengine=libaio 00:15:29.003 direct=1 00:15:29.003 bs=4096 00:15:29.003 iodepth=128 00:15:29.003 norandommap=0 00:15:29.003 numjobs=1 00:15:29.003 00:15:29.003 verify_dump=1 00:15:29.003 verify_backlog=512 00:15:29.003 verify_state_save=0 00:15:29.003 do_verify=1 00:15:29.003 verify=crc32c-intel 00:15:29.003 [job0] 00:15:29.003 filename=/dev/nvme0n1 00:15:29.003 [job1] 00:15:29.003 filename=/dev/nvme0n2 00:15:29.003 [job2] 00:15:29.004 filename=/dev/nvme0n3 00:15:29.004 [job3] 00:15:29.004 filename=/dev/nvme0n4 00:15:29.004 Could not set queue depth (nvme0n1) 00:15:29.004 Could not set queue depth (nvme0n2) 00:15:29.004 Could not set queue depth (nvme0n3) 00:15:29.004 Could not set queue depth (nvme0n4) 00:15:29.004 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:29.004 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:29.004 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:29.004 job3: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:29.004 fio-3.35 00:15:29.004 Starting 4 threads 00:15:30.378 00:15:30.378 job0: (groupid=0, jobs=1): err= 0: pid=70546: Fri Dec 6 13:09:36 2024 00:15:30.378 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:15:30.378 slat (usec): min=3, max=13847, avg=210.91, stdev=1077.79 00:15:30.378 clat (usec): min=3767, max=47134, avg=25732.73, stdev=5275.01 00:15:30.378 lat (usec): min=3781, max=47173, avg=25943.63, stdev=5373.47 00:15:30.378 clat percentiles (usec): 00:15:30.378 | 1.00th=[ 9634], 5.00th=[19792], 10.00th=[21365], 20.00th=[22676], 00:15:30.378 | 30.00th=[23462], 40.00th=[23987], 50.00th=[25297], 60.00th=[26084], 00:15:30.378 | 70.00th=[27132], 80.00th=[28181], 90.00th=[32637], 95.00th=[35914], 00:15:30.378 | 99.00th=[40109], 99.50th=[40633], 99.90th=[40633], 99.95th=[41681], 00:15:30.378 | 99.99th=[46924] 00:15:30.378 write: IOPS=2046, BW=8187KiB/s (8384kB/s)(8220KiB/1004msec); 0 zone resets 00:15:30.378 slat (usec): min=5, max=12250, avg=268.43, stdev=1173.73 00:15:30.378 clat (usec): min=1125, max=70599, avg=36023.00, stdev=13498.09 00:15:30.378 lat (usec): min=3401, max=70626, avg=36291.43, stdev=13623.86 00:15:30.378 clat percentiles (usec): 00:15:30.378 | 1.00th=[19006], 5.00th=[20841], 10.00th=[22414], 20.00th=[22938], 00:15:30.378 | 30.00th=[23725], 40.00th=[25822], 50.00th=[35390], 60.00th=[41681], 00:15:30.378 | 70.00th=[43254], 80.00th=[50594], 90.00th=[55837], 95.00th=[60031], 00:15:30.378 | 99.00th=[62653], 99.50th=[64226], 99.90th=[64226], 99.95th=[64226], 00:15:30.378 | 99.99th=[70779] 00:15:30.378 bw ( KiB/s): min= 8192, max= 8192, per=16.06%, avg=8192.00, stdev= 0.00, samples=2 00:15:30.378 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:15:30.378 lat (msec) : 2=0.02%, 4=0.34%, 10=0.71%, 20=3.27%, 50=85.23% 00:15:30.378 lat (msec) : 100=10.43% 00:15:30.378 cpu : usr=3.09%, sys=5.18%, ctx=447, majf=0, minf=9 00:15:30.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:15:30.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:30.378 issued rwts: total=2048,2055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:30.378 job1: (groupid=0, jobs=1): err= 0: pid=70547: Fri Dec 6 13:09:36 2024 00:15:30.378 read: IOPS=6594, BW=25.8MiB/s (27.0MB/s)(25.8MiB/1002msec) 00:15:30.378 slat (usec): min=7, max=3495, avg=74.38, stdev=366.72 00:15:30.378 clat (usec): min=410, max=13591, avg=9700.69, stdev=1063.66 00:15:30.378 lat (usec): min=3404, max=13618, avg=9775.07, stdev=1086.18 00:15:30.378 clat percentiles (usec): 00:15:30.378 | 1.00th=[ 6849], 5.00th=[ 7898], 10.00th=[ 8291], 20.00th=[ 9241], 00:15:30.378 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9896], 00:15:30.378 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10814], 95.00th=[11207], 00:15:30.378 | 99.00th=[12256], 99.50th=[12649], 99.90th=[12780], 99.95th=[13304], 00:15:30.378 | 99.99th=[13566] 00:15:30.378 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:15:30.378 slat (usec): min=10, max=2893, avg=69.03, stdev=262.23 00:15:30.378 clat (usec): min=6476, max=13296, avg=9407.54, stdev=955.48 00:15:30.378 lat (usec): min=6500, max=13316, avg=9476.57, stdev=941.39 00:15:30.378 clat percentiles (usec): 00:15:30.378 | 1.00th=[ 7046], 5.00th=[ 7439], 
10.00th=[ 7832], 20.00th=[ 8979], 00:15:30.378 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:15:30.378 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10814], 00:15:30.378 | 99.00th=[12125], 99.50th=[12518], 99.90th=[13304], 99.95th=[13304], 00:15:30.378 | 99.99th=[13304] 00:15:30.378 bw ( KiB/s): min=25896, max=27352, per=52.20%, avg=26624.00, stdev=1029.55, samples=2 00:15:30.378 iops : min= 6474, max= 6838, avg=6656.00, stdev=257.39, samples=2 00:15:30.378 lat (usec) : 500=0.01% 00:15:30.378 lat (msec) : 4=0.31%, 10=74.67%, 20=25.02% 00:15:30.378 cpu : usr=5.00%, sys=18.18%, ctx=733, majf=0, minf=11 00:15:30.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:15:30.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:30.378 issued rwts: total=6608,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:30.378 job2: (groupid=0, jobs=1): err= 0: pid=70548: Fri Dec 6 13:09:36 2024 00:15:30.378 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8160KiB/1005msec) 00:15:30.378 slat (usec): min=7, max=21967, avg=241.36, stdev=1537.24 00:15:30.378 clat (usec): min=2716, max=88971, avg=28188.68, stdev=17460.33 00:15:30.378 lat (usec): min=12140, max=89031, avg=28430.03, stdev=17640.95 00:15:30.378 clat percentiles (usec): 00:15:30.378 | 1.00th=[13435], 5.00th=[14746], 10.00th=[15139], 20.00th=[16057], 00:15:30.378 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17433], 60.00th=[20055], 00:15:30.378 | 70.00th=[34341], 80.00th=[46400], 90.00th=[60031], 95.00th=[67634], 00:15:30.378 | 99.00th=[70779], 99.50th=[78119], 99.90th=[84411], 99.95th=[86508], 00:15:30.378 | 99.99th=[88605] 00:15:30.378 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:15:30.379 slat (usec): min=12, max=21654, avg=239.39, stdev=1131.98 00:15:30.379 clat (usec): min=14505, max=88646, avg=32778.02, stdev=18228.60 00:15:30.379 lat (usec): min=14535, max=88671, avg=33017.41, stdev=18338.10 00:15:30.379 clat percentiles (usec): 00:15:30.379 | 1.00th=[17957], 5.00th=[19792], 10.00th=[20579], 20.00th=[21103], 00:15:30.379 | 30.00th=[21365], 40.00th=[21890], 50.00th=[22676], 60.00th=[25297], 00:15:30.379 | 70.00th=[34866], 80.00th=[42730], 90.00th=[65274], 95.00th=[76022], 00:15:30.379 | 99.00th=[85459], 99.50th=[87557], 99.90th=[88605], 99.95th=[88605], 00:15:30.379 | 99.99th=[88605] 00:15:30.379 bw ( KiB/s): min= 7600, max= 8784, per=16.06%, avg=8192.00, stdev=837.21, samples=2 00:15:30.379 iops : min= 1900, max= 2196, avg=2048.00, stdev=209.30, samples=2 00:15:30.379 lat (msec) : 4=0.02%, 20=32.71%, 50=50.83%, 100=16.44% 00:15:30.379 cpu : usr=2.19%, sys=6.97%, ctx=268, majf=0, minf=15 00:15:30.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:15:30.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:30.379 issued rwts: total=2040,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.379 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:30.379 job3: (groupid=0, jobs=1): err= 0: pid=70549: Fri Dec 6 13:09:36 2024 00:15:30.379 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:15:30.379 slat (usec): min=3, max=11679, avg=220.05, stdev=1088.00 00:15:30.379 clat (usec): min=7467, max=46811, avg=26282.56, 
stdev=4906.71 00:15:30.379 lat (usec): min=7481, max=46834, avg=26502.61, stdev=5015.78 00:15:30.379 clat percentiles (usec): 00:15:30.379 | 1.00th=[15139], 5.00th=[20317], 10.00th=[21365], 20.00th=[22938], 00:15:30.379 | 30.00th=[23725], 40.00th=[24773], 50.00th=[25297], 60.00th=[26870], 00:15:30.379 | 70.00th=[27395], 80.00th=[28443], 90.00th=[33817], 95.00th=[35390], 00:15:30.379 | 99.00th=[41157], 99.50th=[42206], 99.90th=[45876], 99.95th=[46400], 00:15:30.379 | 99.99th=[46924] 00:15:30.379 write: IOPS=2045, BW=8183KiB/s (8379kB/s)(8224KiB/1005msec); 0 zone resets 00:15:30.379 slat (usec): min=5, max=14175, avg=259.27, stdev=1139.03 00:15:30.379 clat (usec): min=2358, max=71243, avg=35508.62, stdev=13590.15 00:15:30.379 lat (usec): min=7211, max=71259, avg=35767.89, stdev=13716.83 00:15:30.379 clat percentiles (usec): 00:15:30.379 | 1.00th=[16057], 5.00th=[20579], 10.00th=[21627], 20.00th=[22938], 00:15:30.379 | 30.00th=[23987], 40.00th=[25822], 50.00th=[29754], 60.00th=[41681], 00:15:30.379 | 70.00th=[44303], 80.00th=[49021], 90.00th=[55837], 95.00th=[60031], 00:15:30.379 | 99.00th=[63701], 99.50th=[64226], 99.90th=[65274], 99.95th=[65274], 00:15:30.379 | 99.99th=[70779] 00:15:30.379 bw ( KiB/s): min= 8192, max= 8192, per=16.06%, avg=8192.00, stdev= 0.00, samples=2 00:15:30.379 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:15:30.379 lat (msec) : 4=0.02%, 10=0.37%, 20=2.90%, 50=87.65%, 100=9.06% 00:15:30.379 cpu : usr=3.19%, sys=4.78%, ctx=523, majf=0, minf=15 00:15:30.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:15:30.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:30.379 issued rwts: total=2048,2056,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.379 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:30.379 00:15:30.379 Run status group 0 (all jobs): 00:15:30.379 READ: bw=49.5MiB/s (51.9MB/s), 8119KiB/s-25.8MiB/s (8314kB/s-27.0MB/s), io=49.8MiB (52.2MB), run=1002-1005msec 00:15:30.379 WRITE: bw=49.8MiB/s (52.2MB/s), 8151KiB/s-25.9MiB/s (8347kB/s-27.2MB/s), io=50.1MiB (52.5MB), run=1002-1005msec 00:15:30.379 00:15:30.379 Disk stats (read/write): 00:15:30.379 nvme0n1: ios=1586/1810, merge=0/0, ticks=19702/32293, in_queue=51995, util=87.96% 00:15:30.379 nvme0n2: ios=5681/5823, merge=0/0, ticks=16455/15456, in_queue=31911, util=88.88% 00:15:30.379 nvme0n3: ios=1720/2048, merge=0/0, ticks=13411/21153, in_queue=34564, util=89.70% 00:15:30.379 nvme0n4: ios=1536/1874, merge=0/0, ticks=19985/31799, in_queue=51784, util=89.33% 00:15:30.379 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:30.379 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=70564 00:15:30.379 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:30.379 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:30.379 [global] 00:15:30.379 thread=1 00:15:30.379 invalidate=1 00:15:30.379 rw=read 00:15:30.379 time_based=1 00:15:30.379 runtime=10 00:15:30.379 ioengine=libaio 00:15:30.379 direct=1 00:15:30.379 bs=4096 00:15:30.379 iodepth=1 00:15:30.379 norandommap=1 00:15:30.379 numjobs=1 00:15:30.379 00:15:30.379 [job0] 00:15:30.379 filename=/dev/nvme0n1 00:15:30.379 [job1] 00:15:30.379 filename=/dev/nvme0n2 00:15:30.379 [job2] 
00:15:30.379 filename=/dev/nvme0n3 00:15:30.379 [job3] 00:15:30.379 filename=/dev/nvme0n4 00:15:30.379 Could not set queue depth (nvme0n1) 00:15:30.379 Could not set queue depth (nvme0n2) 00:15:30.379 Could not set queue depth (nvme0n3) 00:15:30.379 Could not set queue depth (nvme0n4) 00:15:30.379 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:30.379 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:30.379 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:30.379 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:30.379 fio-3.35 00:15:30.379 Starting 4 threads 00:15:33.727 13:09:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:33.727 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=34189312, buflen=4096 00:15:33.727 fio: pid=70613, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:33.727 13:09:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:33.727 fio: pid=70612, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:33.727 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=67784704, buflen=4096 00:15:33.727 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:33.727 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:33.984 fio: pid=70610, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:33.984 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=5890048, buflen=4096 00:15:33.984 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:33.984 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:34.549 fio: pid=70611, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:34.549 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=49504256, buflen=4096 00:15:34.549 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:34.549 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:34.549 00:15:34.549 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70610: Fri Dec 6 13:09:40 2024 00:15:34.549 read: IOPS=4961, BW=19.4MiB/s (20.3MB/s)(69.6MiB/3592msec) 00:15:34.549 slat (usec): min=9, max=14061, avg=17.27, stdev=151.45 00:15:34.549 clat (usec): min=130, max=5992, avg=182.75, stdev=74.47 00:15:34.549 lat (usec): min=146, max=14300, avg=200.03, stdev=169.48 00:15:34.549 clat percentiles (usec): 00:15:34.549 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:15:34.549 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 
60.00th=[ 172], 00:15:34.549 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 229], 95.00th=[ 289], 00:15:34.549 | 99.00th=[ 351], 99.50th=[ 367], 99.90th=[ 693], 99.95th=[ 1336], 00:15:34.549 | 99.99th=[ 3523] 00:15:34.549 bw ( KiB/s): min=17736, max=22056, per=37.92%, avg=20972.00, stdev=1639.13, samples=6 00:15:34.549 iops : min= 4434, max= 5512, avg=5242.67, stdev=409.52, samples=6 00:15:34.549 lat (usec) : 250=93.96%, 500=5.88%, 750=0.07%, 1000=0.01% 00:15:34.549 lat (msec) : 2=0.06%, 4=0.02%, 10=0.01% 00:15:34.549 cpu : usr=1.31%, sys=6.60%, ctx=17840, majf=0, minf=1 00:15:34.549 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:34.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.549 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.549 issued rwts: total=17823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.549 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:34.549 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70611: Fri Dec 6 13:09:40 2024 00:15:34.549 read: IOPS=3049, BW=11.9MiB/s (12.5MB/s)(47.2MiB/3964msec) 00:15:34.549 slat (usec): min=8, max=11587, avg=30.12, stdev=220.35 00:15:34.549 clat (usec): min=129, max=70620, avg=295.39, stdev=647.54 00:15:34.549 lat (usec): min=143, max=70651, avg=325.51, stdev=684.20 00:15:34.549 clat percentiles (usec): 00:15:34.549 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 231], 00:15:34.549 | 30.00th=[ 277], 40.00th=[ 302], 50.00th=[ 322], 60.00th=[ 326], 00:15:34.549 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 347], 95.00th=[ 351], 00:15:34.549 | 99.00th=[ 404], 99.50th=[ 537], 99.90th=[ 1205], 99.95th=[ 2073], 00:15:34.549 | 99.99th=[ 3687] 00:15:34.549 bw ( KiB/s): min=10832, max=13166, per=20.86%, avg=11536.86, stdev=903.73, samples=7 00:15:34.549 iops : min= 2708, max= 3291, avg=2884.14, stdev=225.78, samples=7 00:15:34.549 lat (usec) : 250=25.22%, 500=74.20%, 750=0.36%, 1000=0.06% 00:15:34.549 lat (msec) : 2=0.10%, 4=0.05%, 100=0.01% 00:15:34.549 cpu : usr=1.26%, sys=6.84%, ctx=12103, majf=0, minf=2 00:15:34.549 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:34.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.549 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.549 issued rwts: total=12087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.549 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:34.549 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70612: Fri Dec 6 13:09:40 2024 00:15:34.549 read: IOPS=5076, BW=19.8MiB/s (20.8MB/s)(64.6MiB/3260msec) 00:15:34.549 slat (usec): min=9, max=15845, avg=17.39, stdev=152.56 00:15:34.549 clat (usec): min=139, max=3263, avg=178.14, stdev=52.47 00:15:34.549 lat (usec): min=155, max=16032, avg=195.53, stdev=161.46 00:15:34.549 clat percentiles (usec): 00:15:34.549 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 161], 00:15:34.549 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:15:34.549 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 217], 95.00th=[ 241], 00:15:34.550 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 461], 99.95th=[ 1012], 00:15:34.550 | 99.99th=[ 2376] 00:15:34.550 bw ( KiB/s): min=19184, max=22064, per=38.04%, avg=21038.67, stdev=1074.16, samples=6 00:15:34.550 iops : min= 4796, max= 5516, avg=5259.67, stdev=268.54, 
samples=6 00:15:34.550 lat (usec) : 250=95.85%, 500=4.06%, 750=0.02%, 1000=0.01% 00:15:34.550 lat (msec) : 2=0.04%, 4=0.02% 00:15:34.550 cpu : usr=1.44%, sys=6.60%, ctx=16560, majf=0, minf=1 00:15:34.550 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:34.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.550 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.550 issued rwts: total=16550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.550 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:34.550 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70613: Fri Dec 6 13:09:40 2024 00:15:34.550 read: IOPS=2794, BW=10.9MiB/s (11.4MB/s)(32.6MiB/2987msec) 00:15:34.550 slat (nsec): min=13037, max=99789, avg=17710.03, stdev=5321.77 00:15:34.550 clat (usec): min=158, max=2277, avg=338.35, stdev=44.34 00:15:34.550 lat (usec): min=176, max=2298, avg=356.06, stdev=43.90 00:15:34.550 clat percentiles (usec): 00:15:34.550 | 1.00th=[ 260], 5.00th=[ 281], 10.00th=[ 293], 20.00th=[ 322], 00:15:34.550 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 347], 00:15:34.550 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 363], 95.00th=[ 371], 00:15:34.550 | 99.00th=[ 412], 99.50th=[ 537], 99.90th=[ 799], 99.95th=[ 898], 00:15:34.550 | 99.99th=[ 2278] 00:15:34.550 bw ( KiB/s): min=10768, max=12272, per=20.22%, avg=11180.80, stdev=620.42, samples=5 00:15:34.550 iops : min= 2692, max= 3068, avg=2795.20, stdev=155.10, samples=5 00:15:34.550 lat (usec) : 250=0.57%, 500=98.78%, 750=0.52%, 1000=0.07% 00:15:34.550 lat (msec) : 2=0.04%, 4=0.01% 00:15:34.550 cpu : usr=0.87%, sys=4.09%, ctx=8348, majf=0, minf=2 00:15:34.550 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:34.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.550 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.550 issued rwts: total=8348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.550 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:34.550 00:15:34.550 Run status group 0 (all jobs): 00:15:34.550 READ: bw=54.0MiB/s (56.6MB/s), 10.9MiB/s-19.8MiB/s (11.4MB/s-20.8MB/s), io=214MiB (224MB), run=2987-3964msec 00:15:34.550 00:15:34.550 Disk stats (read/write): 00:15:34.550 nvme0n1: ios=16955/0, merge=0/0, ticks=3062/0, in_queue=3062, util=95.25% 00:15:34.550 nvme0n2: ios=11967/0, merge=0/0, ticks=3565/0, in_queue=3565, util=95.66% 00:15:34.550 nvme0n3: ios=16113/0, merge=0/0, ticks=2879/0, in_queue=2879, util=96.02% 00:15:34.550 nvme0n4: ios=8034/0, merge=0/0, ticks=2735/0, in_queue=2735, util=96.76% 00:15:34.807 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:34.807 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:35.064 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:35.064 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:35.323 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:15:35.323 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:35.582 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:35.582 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:36.147 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:36.147 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 70564 00:15:36.147 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:36.148 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:36.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.148 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:36.148 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:15:36.148 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:36.148 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:36.148 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:36.148 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:36.148 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:15:36.148 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:36.148 nvmf hotplug test: fio failed as expected 00:15:36.148 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:36.148 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.406 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:36.406 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:36.406 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:36.406 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:36.406 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:36.406 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:36.406 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:15:36.406 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:36.406 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:15:36.406 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:36.406 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:36.406 rmmod nvme_tcp 00:15:36.406 rmmod nvme_fabrics 00:15:36.406 rmmod nvme_keyring 00:15:36.406 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:36.406 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:15:36.406 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:15:36.406 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 70075 ']' 00:15:36.406 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 70075 00:15:36.406 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 70075 ']' 00:15:36.406 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 70075 00:15:36.407 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:15:36.407 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.407 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70075 00:15:36.407 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:36.407 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.407 killing process with pid 70075 00:15:36.407 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70075' 00:15:36.407 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 70075 00:15:36.407 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 70075 00:15:36.676 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:36.676 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:36.676 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:36.676 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:15:36.676 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:36.676 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:15:36.676 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:36.676 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:36.676 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:36.676 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:36.676 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:36.676 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:36.676 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:36.676 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br 
down 00:15:36.676 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:36.676 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:36.676 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:36.676 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:36.676 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:36.676 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:36.676 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:36.676 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:36.936 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:36.936 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.936 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:36.936 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.936 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:15:36.936 00:15:36.936 real 0m20.478s 00:15:36.936 user 1m18.476s 00:15:36.937 sys 0m9.130s 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.937 ************************************ 00:15:36.937 END TEST nvmf_fio_target 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.937 ************************************ 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:36.937 ************************************ 00:15:36.937 START TEST nvmf_bdevio 00:15:36.937 ************************************ 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:36.937 * Looking for test storage... 
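The hotplug phase that just closed out (the END TEST nvmf_fio_target banner above) follows a simple pattern: start background readers against the four exported namespaces, delete the backing bdevs on the target while that I/O is in flight, and require fio to fail. Condensed from the trace above into a sketch (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and fio-wrapper is the scripts/ helper invoked at target/fio.sh@58; this paraphrases the traced commands rather than quoting the script's source):

    # 10 s of queue-depth-1, 4 KiB reads against the nvmf namespaces
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # pull the bdevs out from under fio; each pending read then completes
    # with "Operation not supported", exactly the io_u errors logged above
    rpc.py bdev_raid_delete concat0
    rpc.py bdev_raid_delete raid0
    rpc.py bdev_malloc_delete Malloc0
    rpc.py bdev_malloc_delete Malloc1    # Malloc2..Malloc6 go the same way
    wait $fio_pid && exit 1 || echo 'nvmf hotplug test: fio failed as expected'

The nvmf_bdevio test starting above reuses the same helpers, beginning with the test-storage probe that continues below.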
00:15:36.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:36.937 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:15:37.196 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:15:37.196 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:15:37.196 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:15:37.196 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.196 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:15:37.196 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:15:37.196 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:37.196 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:37.196 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:15:37.196 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.196 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:37.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.196 --rc genhtml_branch_coverage=1 00:15:37.196 --rc genhtml_function_coverage=1 00:15:37.196 --rc genhtml_legend=1 00:15:37.196 --rc geninfo_all_blocks=1 00:15:37.196 --rc geninfo_unexecuted_blocks=1 00:15:37.196 00:15:37.196 ' 00:15:37.196 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:37.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.196 --rc genhtml_branch_coverage=1 00:15:37.196 --rc genhtml_function_coverage=1 00:15:37.196 --rc genhtml_legend=1 00:15:37.196 --rc geninfo_all_blocks=1 00:15:37.196 --rc geninfo_unexecuted_blocks=1 00:15:37.196 00:15:37.196 ' 00:15:37.196 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:37.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.196 --rc genhtml_branch_coverage=1 00:15:37.196 --rc genhtml_function_coverage=1 00:15:37.196 --rc genhtml_legend=1 00:15:37.196 --rc geninfo_all_blocks=1 00:15:37.196 --rc geninfo_unexecuted_blocks=1 00:15:37.196 00:15:37.196 ' 00:15:37.196 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:37.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.196 --rc genhtml_branch_coverage=1 00:15:37.196 --rc genhtml_function_coverage=1 00:15:37.196 --rc genhtml_legend=1 00:15:37.196 --rc geninfo_all_blocks=1 00:15:37.196 --rc geninfo_unexecuted_blocks=1 00:15:37.196 00:15:37.196 ' 00:15:37.196 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:37.197 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
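nvmftestinit, called just above, is where the virtual test network gets built; every step shows up in the xtrace output that follows. Condensed into a sketch (assembled from that trace rather than the helper's verbatim source; the second nvmf_init_if2/nvmf_tgt_if2 pair and the 10.0.0.2/10.0.0.4 addresses are set up the same way):

    # one veth pair per side; the target end lives in its own network
    # namespace, and the two host-side peers hang off a common bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # let NVMe/TCP traffic through to port 4420 and across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The result is one L2 segment with the host on 10.0.0.1/10.0.0.2 and the namespace on 10.0.0.3/10.0.0.4, which the four pings further down verify; the "Cannot find device" lines below are just the pre-cleanup pass running before any of these interfaces exist.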
00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.197 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:37.198 Cannot find device "nvmf_init_br" 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:37.198 Cannot find device "nvmf_init_br2" 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:37.198 Cannot find device "nvmf_tgt_br" 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:37.198 Cannot find device "nvmf_tgt_br2" 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:37.198 Cannot find device "nvmf_init_br" 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:37.198 Cannot find device "nvmf_init_br2" 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:37.198 Cannot find device "nvmf_tgt_br" 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:37.198 Cannot find device "nvmf_tgt_br2" 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:37.198 Cannot find device "nvmf_br" 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:37.198 Cannot find device "nvmf_init_if" 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:37.198 Cannot find device "nvmf_init_if2" 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:37.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:37.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:37.198 
13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:37.198 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:37.457 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:37.457 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:15:37.457 00:15:37.457 --- 10.0.0.3 ping statistics --- 00:15:37.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.457 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:37.457 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:37.457 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:15:37.457 00:15:37.457 --- 10.0.0.4 ping statistics --- 00:15:37.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.457 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:37.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:37.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:37.457 00:15:37.457 --- 10.0.0.1 ping statistics --- 00:15:37.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.457 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:37.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:37.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:15:37.457 00:15:37.457 --- 10.0.0.2 ping statistics --- 00:15:37.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.457 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=71003 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 71003 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 71003 ']' 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.457 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:37.457 [2024-12-06 13:09:43.936360] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
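With connectivity verified, nvmfappstart launches the target inside the namespace and waits for its RPC socket; the startup banner above continues with the EAL parameter line below. Boiled down to a sketch (the polling loop is a rough stand-in for waitforlisten, which also checks that pid 71003 stays alive, so treat it as an approximation):

    # launch nvmf_tgt in the test namespace; -m 0x78 pins it to cores 3-6,
    # matching the four reactor threads reported a few lines down
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    # wait until the RPC server answers on /var/tmp/spdk.sock
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done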
00:15:37.458 [2024-12-06 13:09:43.936463] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.717 [2024-12-06 13:09:44.088006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:37.717 [2024-12-06 13:09:44.120812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.717 [2024-12-06 13:09:44.120869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.717 [2024-12-06 13:09:44.120881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.717 [2024-12-06 13:09:44.120889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.717 [2024-12-06 13:09:44.120897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.717 [2024-12-06 13:09:44.121985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:37.717 [2024-12-06 13:09:44.122121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:37.717 [2024-12-06 13:09:44.122192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:37.717 [2024-12-06 13:09:44.122195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.717 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.717 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:15:37.717 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:37.717 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:37.717 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:37.717 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.717 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:37.717 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.717 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:37.975 [2024-12-06 13:09:44.248792] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.975 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.975 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:37.975 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.975 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:37.975 Malloc0 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:37.976 [2024-12-06 13:09:44.311052] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:37.976 { 00:15:37.976 "params": { 00:15:37.976 "name": "Nvme$subsystem", 00:15:37.976 "trtype": "$TEST_TRANSPORT", 00:15:37.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:37.976 "adrfam": "ipv4", 00:15:37.976 "trsvcid": "$NVMF_PORT", 00:15:37.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:37.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:37.976 "hdgst": ${hdgst:-false}, 00:15:37.976 "ddgst": ${ddgst:-false} 00:15:37.976 }, 00:15:37.976 "method": "bdev_nvme_attach_controller" 00:15:37.976 } 00:15:37.976 EOF 00:15:37.976 )") 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:15:37.976 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:37.976 "params": { 00:15:37.976 "name": "Nvme1", 00:15:37.976 "trtype": "tcp", 00:15:37.976 "traddr": "10.0.0.3", 00:15:37.976 "adrfam": "ipv4", 00:15:37.976 "trsvcid": "4420", 00:15:37.976 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:37.976 "hdgst": false, 00:15:37.976 "ddgst": false 00:15:37.976 }, 00:15:37.976 "method": "bdev_nvme_attach_controller" 00:15:37.976 }' 00:15:37.976 [2024-12-06 13:09:44.373168] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:15:37.976 [2024-12-06 13:09:44.373258] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71038 ] 00:15:38.234 [2024-12-06 13:09:44.525699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:38.234 [2024-12-06 13:09:44.588603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.234 [2024-12-06 13:09:44.588731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.234 [2024-12-06 13:09:44.588752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.234 I/O targets: 00:15:38.234 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:38.234 00:15:38.234 00:15:38.234 CUnit - A unit testing framework for C - Version 2.1-3 00:15:38.234 http://cunit.sourceforge.net/ 00:15:38.234 00:15:38.234 00:15:38.234 Suite: bdevio tests on: Nvme1n1 00:15:38.491 Test: blockdev write read block ...passed 00:15:38.491 Test: blockdev write zeroes read block ...passed 00:15:38.491 Test: blockdev write zeroes read no split ...passed 00:15:38.491 Test: blockdev write zeroes read split ...passed 00:15:38.491 Test: blockdev write zeroes read split partial ...passed 00:15:38.491 Test: blockdev reset ...[2024-12-06 13:09:44.866030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:38.491 [2024-12-06 13:09:44.866363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x239bf70 (9): Bad file descriptor 00:15:38.491 [2024-12-06 13:09:44.877459] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:38.491 passed 00:15:38.491 Test: blockdev write read 8 blocks ...passed 00:15:38.491 Test: blockdev write read size > 128k ...passed 00:15:38.491 Test: blockdev write read invalid size ...passed 00:15:38.491 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:38.491 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:38.491 Test: blockdev write read max offset ...passed 00:15:38.491 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:38.491 Test: blockdev writev readv 8 blocks ...passed 00:15:38.491 Test: blockdev writev readv 30 x 1block ...passed 00:15:38.759 Test: blockdev writev readv block ...passed 00:15:38.759 Test: blockdev writev readv size > 128k ...passed 00:15:38.759 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:38.760 Test: blockdev comparev and writev ...[2024-12-06 13:09:45.053448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:38.760 [2024-12-06 13:09:45.053635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:38.760 [2024-12-06 13:09:45.053665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:38.760 [2024-12-06 13:09:45.053676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:38.760 [2024-12-06 13:09:45.053984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:38.760 [2024-12-06 13:09:45.054003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:38.760 [2024-12-06 13:09:45.054021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:38.760 [2024-12-06 13:09:45.054031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:38.760 [2024-12-06 13:09:45.054306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:38.760 [2024-12-06 13:09:45.054328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:38.760 [2024-12-06 13:09:45.054346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:38.760 [2024-12-06 13:09:45.054356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:38.760 [2024-12-06 13:09:45.054780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:38.760 [2024-12-06 13:09:45.054810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:38.760 [2024-12-06 13:09:45.054829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:38.760 [2024-12-06 13:09:45.054839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:38.760 passed 00:15:38.760 Test: blockdev nvme passthru rw ...passed 00:15:38.760 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:09:45.138553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:38.760 [2024-12-06 13:09:45.138592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:38.760 [2024-12-06 13:09:45.138727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:38.760 [2024-12-06 13:09:45.138744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:38.760 [2024-12-06 13:09:45.138858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:38.760 [2024-12-06 13:09:45.138874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:38.760 passed 00:15:38.760 Test: blockdev nvme admin passthru ...[2024-12-06 13:09:45.139003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:38.760 [2024-12-06 13:09:45.139020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:38.760 passed 00:15:38.760 Test: blockdev copy ...passed 00:15:38.760 00:15:38.760 Run Summary: Type Total Ran Passed Failed Inactive 00:15:38.760 suites 1 1 n/a 0 0 00:15:38.760 tests 23 23 23 0 0 00:15:38.760 asserts 152 152 152 0 n/a 00:15:38.760 00:15:38.760 Elapsed time = 0.898 seconds 00:15:39.023 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:39.023 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.023 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:39.023 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.023 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:39.023 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:39.023 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:39.023 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:15:39.023 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:39.024 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:15:39.024 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:39.024 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:39.024 rmmod nvme_tcp 00:15:39.024 rmmod nvme_fabrics 00:15:39.024 rmmod nvme_keyring 00:15:39.024 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:39.024 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:15:39.024 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:15:39.024 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 71003 ']' 00:15:39.024 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 71003 00:15:39.024 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 71003 ']' 00:15:39.024 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 71003 00:15:39.024 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:15:39.024 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:39.024 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71003 00:15:39.024 killing process with pid 71003 00:15:39.024 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:39.024 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:39.024 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71003' 00:15:39.024 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 71003 00:15:39.024 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 71003 00:15:39.286 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:39.286 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:39.287 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.545 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.545 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:39.545 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.545 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:39.545 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.545 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:15:39.545 ************************************ 00:15:39.545 00:15:39.545 real 0m2.576s 00:15:39.545 user 0m7.810s 00:15:39.545 sys 0m0.765s 00:15:39.545 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.545 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:39.545 END TEST nvmf_bdevio 00:15:39.545 ************************************ 00:15:39.545 13:09:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:39.545 ************************************ 00:15:39.545 END TEST nvmf_target_core 00:15:39.545 ************************************ 00:15:39.545 00:15:39.545 real 3m36.721s 00:15:39.545 user 11m40.724s 00:15:39.545 sys 1m1.742s 00:15:39.545 13:09:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.545 13:09:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:39.545 13:09:45 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:39.545 13:09:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:39.545 13:09:45 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.545 13:09:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:39.545 ************************************ 00:15:39.545 START TEST nvmf_target_extra 00:15:39.545 ************************************ 00:15:39.545 13:09:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:39.545 * Looking for test storage... 
00:15:39.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:39.545 13:09:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:39.545 13:09:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:15:39.545 13:09:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:39.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.804 --rc genhtml_branch_coverage=1 00:15:39.804 --rc genhtml_function_coverage=1 00:15:39.804 --rc genhtml_legend=1 00:15:39.804 --rc geninfo_all_blocks=1 00:15:39.804 --rc geninfo_unexecuted_blocks=1 00:15:39.804 00:15:39.804 ' 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:39.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.804 --rc genhtml_branch_coverage=1 00:15:39.804 --rc genhtml_function_coverage=1 00:15:39.804 --rc genhtml_legend=1 00:15:39.804 --rc geninfo_all_blocks=1 00:15:39.804 --rc geninfo_unexecuted_blocks=1 00:15:39.804 00:15:39.804 ' 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:39.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.804 --rc genhtml_branch_coverage=1 00:15:39.804 --rc genhtml_function_coverage=1 00:15:39.804 --rc genhtml_legend=1 00:15:39.804 --rc geninfo_all_blocks=1 00:15:39.804 --rc geninfo_unexecuted_blocks=1 00:15:39.804 00:15:39.804 ' 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:39.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.804 --rc genhtml_branch_coverage=1 00:15:39.804 --rc genhtml_function_coverage=1 00:15:39.804 --rc genhtml_legend=1 00:15:39.804 --rc geninfo_all_blocks=1 00:15:39.804 --rc geninfo_unexecuted_blocks=1 00:15:39.804 00:15:39.804 ' 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:15:39.804 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.805 13:09:46 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:39.805 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:39.805 ************************************ 00:15:39.805 START TEST nvmf_example 00:15:39.805 ************************************ 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:39.805 * Looking for test storage... 
00:15:39.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:15:39.805 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:40.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.065 --rc genhtml_branch_coverage=1 00:15:40.065 --rc genhtml_function_coverage=1 00:15:40.065 --rc genhtml_legend=1 00:15:40.065 --rc geninfo_all_blocks=1 00:15:40.065 --rc geninfo_unexecuted_blocks=1 00:15:40.065 00:15:40.065 ' 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:40.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.065 --rc genhtml_branch_coverage=1 00:15:40.065 --rc genhtml_function_coverage=1 00:15:40.065 --rc genhtml_legend=1 00:15:40.065 --rc geninfo_all_blocks=1 00:15:40.065 --rc geninfo_unexecuted_blocks=1 00:15:40.065 00:15:40.065 ' 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:40.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.065 --rc genhtml_branch_coverage=1 00:15:40.065 --rc genhtml_function_coverage=1 00:15:40.065 --rc genhtml_legend=1 00:15:40.065 --rc geninfo_all_blocks=1 00:15:40.065 --rc geninfo_unexecuted_blocks=1 00:15:40.065 00:15:40.065 ' 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:40.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.065 --rc genhtml_branch_coverage=1 00:15:40.065 --rc genhtml_function_coverage=1 00:15:40.065 --rc genhtml_legend=1 00:15:40.065 --rc geninfo_all_blocks=1 00:15:40.065 --rc geninfo_unexecuted_blocks=1 00:15:40.065 00:15:40.065 ' 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:15:40.065 13:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.065 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:40.066 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:15:40.066 13:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:40.066 Cannot find device "nvmf_init_br" 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:40.066 Cannot find device "nvmf_init_br2" 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:40.066 Cannot find device "nvmf_tgt_br" 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:40.066 Cannot find device "nvmf_tgt_br2" 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:40.066 Cannot find device "nvmf_init_br" 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:40.066 Cannot find device "nvmf_init_br2" 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:40.066 Cannot find device "nvmf_tgt_br" 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:40.066 Cannot find device "nvmf_tgt_br2" 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:40.066 Cannot find device "nvmf_br" 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:40.066 Cannot find 
device "nvmf_init_if" 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:40.066 Cannot find device "nvmf_init_if2" 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:40.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:40.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:40.066 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:40.067 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:40.067 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:40.067 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:40.325 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:40.325 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:40.325 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:40.325 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:40.325 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:40.325 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:40.325 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:40.325 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:40.325 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:40.325 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:40.325 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:40.325 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:40.325 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:15:40.325 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:15:40.325 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms
00:15:40.325 
00:15:40.325 --- 10.0.0.3 ping statistics ---
00:15:40.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:40.325 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:15:40.325 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:15:40.325 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms
00:15:40.325 
00:15:40.325 --- 10.0.0.4 ping statistics ---
00:15:40.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:40.325 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:15:40.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:40.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms
00:15:40.325 
00:15:40.325 --- 10.0.0.1 ping statistics ---
00:15:40.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:40.325 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:15:40.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:40.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms
00:15:40.325 
00:15:40.325 --- 10.0.0.2 ping statistics ---
00:15:40.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:40.325 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:40.325  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:40.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
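The ipts/iptables pairs in the trace above show the harness's rule-tagging trick: every rule it inserts is re-issued with an -m comment stamped "SPDK_NVMF:" plus the rule's own arguments, so teardown can later strip exactly these rules without tracking them one by one. An approximation of the two helpers (the real ones live in nvmf/common.sh; the names come from the trace, the bodies are reconstructed):

# Reconstructed sketch of the helpers traced at nvmf/common.sh@790/@791.
ipts() {
    # Insert the rule, recording its own arguments in a comment.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
iptr() {
    # Drop every SPDK-tagged rule in one pass; all other rules survive.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

iptr reappears at the end of the run (nvmf/common.sh@297 below) to undo these rules, and the four pings confirm host-to-namespace reachability in both directions before the target is started.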
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=71330
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 71330
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 71330 ']'
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:40.326  13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
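Each rpc_cmd above is a JSON-RPC call against the socket the example target exposes at /var/tmp/spdk.sock: create the TCP transport, back a namespace with a RAM disk, and publish it behind a listener. The same provisioning can be done by hand with SPDK's scripts/rpc.py; a sketch using the exact arguments from the trace (the rpc.py path and the default socket are assumptions):

# Sketch: manual equivalent of the rpc_cmd sequence traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport; -u sets the I/O unit size
$rpc bdev_malloc_create 64 512                     # 64 MB RAM-backed bdev, 512 B blocks -> "Malloc0"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The spdk_nvme_perf run that follows then drives this subsystem from the initiator side: 64 outstanding I/Os (-q 64) of 4 KiB each (-o 4096), a random mixed workload with a 30% read share (-w randrw -M 30), for 10 seconds (-t 10).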
00:15:40.892  13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:15:53.105 Initializing NVMe Controllers
00:15:53.105 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:15:53.105 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:15:53.105 Initialization complete. Launching workers.
00:15:53.105 ========================================================
00:15:53.105                                                                              Latency(us)
00:15:53.105 Device Information                                                     :     IOPS     MiB/s   Average       min       max
00:15:53.105 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14731.34     57.54   4343.96    720.06  28340.88
00:15:53.105 ========================================================
00:15:53.105 Total                                                                  : 14731.34     57.54   4343.96    720.06  28340.88
00:15:53.105 
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:15:53.105 rmmod nvme_tcp
00:15:53.105 rmmod nvme_fabrics
00:15:53.105 rmmod nvme_keyring
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 71330 ']'
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 71330
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 71330 ']'
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 71330
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71330
00:15:53.105 killing process with pid 71330
13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71330'
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 71330
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 71330
00:15:53.105 nvmf threads initialize successfully
00:15:53.105 bdev subsystem init successfully
00:15:53.105 created a nvmf target service
00:15:53.105 create targets's poll groups done
00:15:53.105 all subsystems of target started
00:15:53.105 nvmf target is running
00:15:53.105 all subsystems of target stopped
00:15:53.105 destroy targets's poll groups done
00:15:53.105 destroyed the nvmf target service
00:15:53.105 bdev subsystem finish successfully
00:15:53.105 nvmf threads destroy successfully
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:15:53.105  13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
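The teardown trace above is the mirror image of the setup: scrub only the SPDK-tagged iptables rules, detach the bridge ports, delete the bridge and the veth pairs (removing one end of a veth removes its peer), then drop the namespace. A condensed sketch of the same sequence (the final `ip netns delete` is an assumed equivalent of the _remove_spdk_ns helper in the trace):

# Sketch of the nvmf_veth_fini teardown traced above.
iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the SPDK-tagged rules
for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br" nomaster                           # detach from the bridge
    ip link set "$br" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if                              # peer nvmf_init_br goes with it
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                         # assumed equivalent of _remove_spdk_ns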
00:15:53.105 
00:15:53.105 real	0m11.909s
00:15:53.105 user	0m41.623s
00:15:53.105 sys	0m1.884s
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:53.105 ************************************
00:15:53.105 END TEST nvmf_example
00:15:53.105 ************************************
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:15:53.105 ************************************
00:15:53.105 START TEST nvmf_filesystem
00:15:53.105 ************************************
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:15:53.105 * Looking for test storage...
00:15:53.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:15:53.105  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:15:53.106  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:15:53.106  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:15:53.106  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:15:53.106  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:15:53.106  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:15:53.106  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:53.106  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:15:53.106  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:15:53.106  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:53.106  13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- #
(( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:53.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.106 --rc genhtml_branch_coverage=1 00:15:53.106 --rc genhtml_function_coverage=1 00:15:53.106 --rc genhtml_legend=1 00:15:53.106 --rc geninfo_all_blocks=1 00:15:53.106 --rc geninfo_unexecuted_blocks=1 00:15:53.106 00:15:53.106 ' 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:53.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.106 --rc genhtml_branch_coverage=1 00:15:53.106 --rc genhtml_function_coverage=1 00:15:53.106 --rc genhtml_legend=1 00:15:53.106 --rc geninfo_all_blocks=1 00:15:53.106 --rc geninfo_unexecuted_blocks=1 00:15:53.106 00:15:53.106 ' 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:53.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.106 --rc genhtml_branch_coverage=1 00:15:53.106 --rc genhtml_function_coverage=1 00:15:53.106 --rc genhtml_legend=1 00:15:53.106 --rc geninfo_all_blocks=1 00:15:53.106 --rc geninfo_unexecuted_blocks=1 00:15:53.106 00:15:53.106 ' 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:53.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.106 --rc genhtml_branch_coverage=1 00:15:53.106 --rc genhtml_function_coverage=1 00:15:53.106 --rc genhtml_legend=1 00:15:53.106 --rc geninfo_all_blocks=1 00:15:53.106 --rc geninfo_unexecuted_blocks=1 00:15:53.106 00:15:53.106 ' 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:15:53.106 13:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # 
CONFIG_LTO=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@51 -- # CONFIG_RDMA=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:15:53.106 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:15:53.107 13:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:53.107 #define SPDK_CONFIG_H 00:15:53.107 #define SPDK_CONFIG_AIO_FSDEV 1 
00:15:53.107 #define SPDK_CONFIG_APPS 1 00:15:53.107 #define SPDK_CONFIG_ARCH native 00:15:53.107 #undef SPDK_CONFIG_ASAN 00:15:53.107 #define SPDK_CONFIG_AVAHI 1 00:15:53.107 #undef SPDK_CONFIG_CET 00:15:53.107 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:53.107 #define SPDK_CONFIG_COVERAGE 1 00:15:53.107 #define SPDK_CONFIG_CROSS_PREFIX 00:15:53.107 #undef SPDK_CONFIG_CRYPTO 00:15:53.107 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:53.107 #undef SPDK_CONFIG_CUSTOMOCF 00:15:53.107 #undef SPDK_CONFIG_DAOS 00:15:53.107 #define SPDK_CONFIG_DAOS_DIR 00:15:53.107 #define SPDK_CONFIG_DEBUG 1 00:15:53.107 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:53.107 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:15:53.107 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:53.107 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:53.107 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:53.107 #undef SPDK_CONFIG_DPDK_UADK 00:15:53.107 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:53.107 #define SPDK_CONFIG_EXAMPLES 1 00:15:53.107 #undef SPDK_CONFIG_FC 00:15:53.107 #define SPDK_CONFIG_FC_PATH 00:15:53.107 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:53.107 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:53.107 #define SPDK_CONFIG_FSDEV 1 00:15:53.107 #undef SPDK_CONFIG_FUSE 00:15:53.107 #undef SPDK_CONFIG_FUZZER 00:15:53.107 #define SPDK_CONFIG_FUZZER_LIB 00:15:53.107 #define SPDK_CONFIG_GOLANG 1 00:15:53.107 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:53.107 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:53.107 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:53.107 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:53.107 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:53.107 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:53.107 #undef SPDK_CONFIG_HAVE_LZ4 00:15:53.107 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:53.107 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:53.107 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:53.107 #define SPDK_CONFIG_IDXD 1 00:15:53.107 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:53.107 #undef SPDK_CONFIG_IPSEC_MB 00:15:53.107 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:53.107 #define SPDK_CONFIG_ISAL 1 00:15:53.107 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:53.107 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:53.107 #define SPDK_CONFIG_LIBDIR 00:15:53.107 #undef SPDK_CONFIG_LTO 00:15:53.107 #define SPDK_CONFIG_MAX_LCORES 128 00:15:53.107 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:15:53.107 #define SPDK_CONFIG_NVME_CUSE 1 00:15:53.107 #undef SPDK_CONFIG_OCF 00:15:53.107 #define SPDK_CONFIG_OCF_PATH 00:15:53.107 #define SPDK_CONFIG_OPENSSL_PATH 00:15:53.107 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:53.107 #define SPDK_CONFIG_PGO_DIR 00:15:53.107 #undef SPDK_CONFIG_PGO_USE 00:15:53.107 #define SPDK_CONFIG_PREFIX /usr/local 00:15:53.107 #undef SPDK_CONFIG_RAID5F 00:15:53.107 #undef SPDK_CONFIG_RBD 00:15:53.107 #define SPDK_CONFIG_RDMA 1 00:15:53.107 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:53.107 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:53.107 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:53.107 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:53.107 #define SPDK_CONFIG_SHARED 1 00:15:53.107 #undef SPDK_CONFIG_SMA 00:15:53.107 #define SPDK_CONFIG_TESTS 1 00:15:53.107 #undef SPDK_CONFIG_TSAN 00:15:53.107 #define SPDK_CONFIG_UBLK 1 00:15:53.107 #define SPDK_CONFIG_UBSAN 1 00:15:53.107 #undef SPDK_CONFIG_UNIT_TESTS 00:15:53.107 #undef SPDK_CONFIG_URING 00:15:53.107 #define SPDK_CONFIG_URING_PATH 00:15:53.107 #undef SPDK_CONFIG_URING_ZNS 00:15:53.107 #define SPDK_CONFIG_USDT 1 00:15:53.107 #undef 
SPDK_CONFIG_VBDEV_COMPRESS 00:15:53.107 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:53.107 #undef SPDK_CONFIG_VFIO_USER 00:15:53.107 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:53.107 #define SPDK_CONFIG_VHOST 1 00:15:53.107 #define SPDK_CONFIG_VIRTIO 1 00:15:53.107 #undef SPDK_CONFIG_VTUNE 00:15:53.107 #define SPDK_CONFIG_VTUNE_DIR 00:15:53.107 #define SPDK_CONFIG_WERROR 1 00:15:53.107 #define SPDK_CONFIG_WPDK_DIR 00:15:53.107 #undef SPDK_CONFIG_XNVME 00:15:53.107 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.107 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 
-- # export SPDK_TEST_LVOL 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:53.108 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:15:53.109 
13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:53.109 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:53.110 13:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 71592 ]] 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 71592 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.up0jLv 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.up0jLv/tests/target /tmp/spdk.up0jLv 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=btrfs 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13979430912 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5589405696 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6256390144 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=2486431744 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=20140032 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13979430912 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5589405696 00:15:53.110 
13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:53.110 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=139264 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt/output 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=fuse.sshfs 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=95715823616 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=3986956288 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:15:53.111 * Looking for test storage... 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/home 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=13979430912 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:53.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:53.111 13:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:53.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.111 --rc genhtml_branch_coverage=1 00:15:53.111 --rc genhtml_function_coverage=1 00:15:53.111 --rc genhtml_legend=1 00:15:53.111 --rc geninfo_all_blocks=1 00:15:53.111 --rc geninfo_unexecuted_blocks=1 00:15:53.111 00:15:53.111 ' 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:53.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.111 --rc genhtml_branch_coverage=1 00:15:53.111 --rc genhtml_function_coverage=1 00:15:53.111 --rc genhtml_legend=1 00:15:53.111 --rc geninfo_all_blocks=1 00:15:53.111 --rc geninfo_unexecuted_blocks=1 00:15:53.111 00:15:53.111 ' 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:53.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.111 --rc genhtml_branch_coverage=1 00:15:53.111 --rc genhtml_function_coverage=1 00:15:53.111 --rc genhtml_legend=1 00:15:53.111 --rc geninfo_all_blocks=1 00:15:53.111 --rc geninfo_unexecuted_blocks=1 00:15:53.111 00:15:53.111 ' 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:53.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.111 --rc genhtml_branch_coverage=1 00:15:53.111 --rc genhtml_function_coverage=1 00:15:53.111 --rc genhtml_legend=1 00:15:53.111 --rc geninfo_all_blocks=1 00:15:53.111 --rc geninfo_unexecuted_blocks=1 00:15:53.111 00:15:53.111 ' 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:53.111 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
# uname -s 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:53.112 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:53.112 13:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:53.112 Cannot find device "nvmf_init_br" 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:53.112 Cannot find device "nvmf_init_br2" 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:53.112 Cannot find device "nvmf_tgt_br" 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:53.112 Cannot find device "nvmf_tgt_br2" 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:53.112 Cannot find device "nvmf_init_br" 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:53.112 Cannot find device "nvmf_init_br2" 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:53.112 Cannot find device "nvmf_tgt_br" 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:53.112 Cannot find device "nvmf_tgt_br2" 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:53.112 Cannot find device "nvmf_br" 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:15:53.112 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:53.112 Cannot find device "nvmf_init_if" 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:53.113 Cannot find device "nvmf_init_if2" 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:53.113 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:53.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:53.113 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:53.113 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:15:53.113 00:15:53.113 --- 10.0.0.3 ping statistics --- 00:15:53.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.113 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:53.113 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:53.113 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:15:53.113 00:15:53.113 --- 10.0.0.4 ping statistics --- 00:15:53.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.113 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:53.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:53.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:53.113 00:15:53.113 --- 10.0.0.1 ping statistics --- 00:15:53.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.113 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:53.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:53.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:15:53.113 00:15:53.113 --- 10.0.0.2 ping statistics --- 00:15:53.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.113 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:53.113 ************************************ 00:15:53.113 START TEST nvmf_filesystem_no_in_capsule 00:15:53.113 ************************************ 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=71777 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 71777 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 71777 ']' 00:15:53.113 13:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.113 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:53.113 [2024-12-06 13:09:59.072782] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:15:53.113 [2024-12-06 13:09:59.072931] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.113 [2024-12-06 13:09:59.241074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:53.113 [2024-12-06 13:09:59.291543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.113 [2024-12-06 13:09:59.291621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.113 [2024-12-06 13:09:59.291658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.113 [2024-12-06 13:09:59.291675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.113 [2024-12-06 13:09:59.291687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
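
The ip/iptables trace above is nvmf/common.sh building the virtual test network: bridge-side interfaces enslaved to nvmf_br, ACCEPT rules for the NVMe/TCP port inserted through the ipts wrapper (which tags each rule with an SPDK_NVMF comment so teardown can find it later), and reachability in both directions between the host and the nvmf_tgt_ns_spdk namespace confirmed by the four pings. A minimal hand-rolled sketch of one leg of that topology follows; the veth pairing and which address sits on which end are inferred from the ping directions, not copied from the script, which also sets up the second interface pair and error handling:

# sketch only -- assumes veth pairs; nvmf/common.sh does the equivalent with more interfaces
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end moves into the netns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # answers the in-netns pings
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # listener address
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
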
00:15:53.113 [2024-12-06 13:09:59.292788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.113 [2024-12-06 13:09:59.292978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.114 [2024-12-06 13:09:59.293079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.114 [2024-12-06 13:09:59.293302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:53.681 [2024-12-06 13:10:00.082211] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:53.681 Malloc1 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.681 13:10:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:53.681 [2024-12-06 13:10:00.189930] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:53.681 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.682 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:53.682 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:15:53.682 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:53.682 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:15:53.682 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:15:53.682 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:53.682 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.682 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:53.940 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.940 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:53.940 { 00:15:53.940 "aliases": [ 00:15:53.940 "28ab3265-b1bb-4e04-a904-68086fbf7e5e" 00:15:53.940 ], 00:15:53.940 "assigned_rate_limits": { 00:15:53.940 "r_mbytes_per_sec": 0, 00:15:53.940 "rw_ios_per_sec": 0, 00:15:53.940 "rw_mbytes_per_sec": 0, 00:15:53.940 "w_mbytes_per_sec": 0 00:15:53.940 }, 00:15:53.940 "block_size": 512, 00:15:53.940 "claim_type": "exclusive_write", 00:15:53.940 "claimed": true, 00:15:53.940 "driver_specific": {}, 00:15:53.940 "memory_domains": [ 00:15:53.940 { 00:15:53.940 "dma_device_id": "system", 00:15:53.940 "dma_device_type": 1 00:15:53.940 }, 00:15:53.940 { 00:15:53.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.940 
"dma_device_type": 2 00:15:53.940 } 00:15:53.940 ], 00:15:53.940 "name": "Malloc1", 00:15:53.940 "num_blocks": 1048576, 00:15:53.940 "product_name": "Malloc disk", 00:15:53.940 "supported_io_types": { 00:15:53.940 "abort": true, 00:15:53.940 "compare": false, 00:15:53.940 "compare_and_write": false, 00:15:53.940 "copy": true, 00:15:53.940 "flush": true, 00:15:53.940 "get_zone_info": false, 00:15:53.940 "nvme_admin": false, 00:15:53.940 "nvme_io": false, 00:15:53.940 "nvme_io_md": false, 00:15:53.940 "nvme_iov_md": false, 00:15:53.940 "read": true, 00:15:53.940 "reset": true, 00:15:53.940 "seek_data": false, 00:15:53.940 "seek_hole": false, 00:15:53.940 "unmap": true, 00:15:53.940 "write": true, 00:15:53.940 "write_zeroes": true, 00:15:53.940 "zcopy": true, 00:15:53.940 "zone_append": false, 00:15:53.940 "zone_management": false 00:15:53.940 }, 00:15:53.940 "uuid": "28ab3265-b1bb-4e04-a904-68086fbf7e5e", 00:15:53.940 "zoned": false 00:15:53.940 } 00:15:53.940 ]' 00:15:53.940 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:53.940 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:15:53.940 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:53.940 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:15:53.940 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:15:53.940 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:15:53.940 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:53.940 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:54.199 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:54.199 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:15:54.199 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:54.199 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:54.199 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:15:56.096 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:56.096 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:56.096 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:15:56.096 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:56.096 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:56.096 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:15:56.096 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:56.096 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:56.096 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:56.096 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:56.096 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:56.096 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:56.096 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:56.096 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:56.096 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:56.096 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:56.096 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:56.096 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:56.354 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:57.308 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:15:57.308 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:57.308 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:57.308 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.308 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:57.308 ************************************ 00:15:57.308 START TEST filesystem_ext4 00:15:57.308 ************************************ 00:15:57.308 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
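
Everything the target needed in this block went over the JSON-RPC socket via rpc_cmd: a TCP transport with in-capsule data disabled (-c 0), a 512 MiB Malloc1 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDKISFASTANDAWESOME, the bdev attached as its namespace, and a listener on 10.0.0.3:4420; the host side then attached with nvme connect and carved a single GPT partition. Collapsed into plain scripts/rpc.py calls with the same arguments as the trace (repo path abbreviated), the sequence is roughly:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1    # 512 MiB of 512 B blocks -> 1048576 blocks, as in the bdev dump
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 \
    --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%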
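
Each filesystem_* subtest that follows (ext4, then btrfs, then xfs) runs the same short exercise from target/filesystem.sh: build a filesystem on the exported partition, mount it, create and remove a file with a sync after each step, unmount, then check that the target (pid 71777 in this run) survived and the namespace is still visible. A condensed sketch of the loop; only the force flag differs per filesystem, matching make_filesystem in autotest_common.sh:

for fstype in ext4 btrfs xfs; do
    force=-f; [ "$fstype" = ext4 ] && force=-F     # mkfs.ext4 wants -F, the others -f
    "mkfs.$fstype" "$force" /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 71777                                  # nvmf_tgt must still be running after the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1p1        # partition still exported
done
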
00:15:57.308 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:57.308 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:57.308 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:57.308 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:15:57.308 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:57.308 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:15:57.308 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:15:57.308 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:15:57.308 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:15:57.308 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:57.308 mke2fs 1.47.0 (5-Feb-2023) 00:15:57.308 Discarding device blocks: 0/522240 done 00:15:57.308 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:57.308 Filesystem UUID: 3d59dff2-be58-4733-929d-c7f3a4fa8fa1 00:15:57.308 Superblock backups stored on blocks: 00:15:57.308 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:57.308 00:15:57.308 Allocating group tables: 0/64 done 00:15:57.308 Writing inode tables: 0/64 done 00:15:57.308 Creating journal (8192 blocks): done 00:15:57.308 Writing superblocks and filesystem accounting information: 0/64 done 00:15:57.308 00:15:57.308 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:15:57.308 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:02.568 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:02.827 
13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 71777 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:02.827 00:16:02.827 real 0m5.505s 00:16:02.827 user 0m0.021s 00:16:02.827 sys 0m0.060s 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:02.827 ************************************ 00:16:02.827 END TEST filesystem_ext4 00:16:02.827 ************************************ 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:02.827 ************************************ 00:16:02.827 START TEST filesystem_btrfs 00:16:02.827 ************************************ 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:02.827 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:02.828 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:16:02.828 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:02.828 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:16:02.828 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:16:02.828 13:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:16:02.828 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:16:02.828 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:03.086 btrfs-progs v6.8.1 00:16:03.086 See https://btrfs.readthedocs.io for more information. 00:16:03.086 00:16:03.086 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:16:03.086 NOTE: several default settings have changed in version 5.15, please make sure 00:16:03.086 this does not affect your deployments: 00:16:03.086 - DUP for metadata (-m dup) 00:16:03.086 - enabled no-holes (-O no-holes) 00:16:03.086 - enabled free-space-tree (-R free-space-tree) 00:16:03.086 00:16:03.086 Label: (null) 00:16:03.086 UUID: baeb9698-c7df-4147-a2e5-690aed7aa61e 00:16:03.086 Node size: 16384 00:16:03.086 Sector size: 4096 (CPU page size: 4096) 00:16:03.086 Filesystem size: 510.00MiB 00:16:03.086 Block group profiles: 00:16:03.086 Data: single 8.00MiB 00:16:03.086 Metadata: DUP 32.00MiB 00:16:03.086 System: DUP 8.00MiB 00:16:03.086 SSD detected: yes 00:16:03.086 Zoned device: no 00:16:03.086 Features: extref, skinny-metadata, no-holes, free-space-tree 00:16:03.086 Checksum: crc32c 00:16:03.086 Number of devices: 1 00:16:03.086 Devices: 00:16:03.086 ID SIZE PATH 00:16:03.086 1 510.00MiB /dev/nvme0n1p1 00:16:03.086 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 71777 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:03.086 
13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:03.086 ************************************ 00:16:03.086 END TEST filesystem_btrfs 00:16:03.086 ************************************ 00:16:03.086 00:16:03.086 real 0m0.256s 00:16:03.086 user 0m0.027s 00:16:03.086 sys 0m0.048s 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:03.086 ************************************ 00:16:03.086 START TEST filesystem_xfs 00:16:03.086 ************************************ 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:16:03.086 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:03.086 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:03.086 = sectsz=512 attr=2, projid32bit=1 00:16:03.086 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:03.086 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:03.086 data 
= bsize=4096 blocks=130560, imaxpct=25 00:16:03.086 = sunit=0 swidth=0 blks 00:16:03.086 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:03.086 log =internal log bsize=4096 blocks=16384, version=2 00:16:03.086 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:03.086 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:04.021 Discarding blocks...Done. 00:16:04.021 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:16:04.021 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:06.547 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:06.547 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:16:06.547 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:06.547 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:16:06.547 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:16:06.547 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:06.547 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 71777 00:16:06.547 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:06.547 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:06.547 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:06.547 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:06.547 ************************************ 00:16:06.547 END TEST filesystem_xfs 00:16:06.547 ************************************ 00:16:06.547 00:16:06.547 real 0m3.073s 00:16:06.547 user 0m0.012s 00:16:06.547 sys 0m0.060s 00:16:06.547 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.547 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:06.547 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:06.547 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:06.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.548 13:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 71777 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 71777 ']' 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 71777 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71777 00:16:06.548 killing process with pid 71777 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71777' 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 71777 00:16:06.548 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 71777 00:16:06.548 ************************************ 00:16:06.548 END TEST nvmf_filesystem_no_in_capsule 00:16:06.548 ************************************ 00:16:06.548 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:06.548 00:16:06.548 real 0m14.056s 00:16:06.548 user 0m53.794s 00:16:06.548 sys 0m1.935s 00:16:06.548 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.548 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:06.548 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:16:06.548 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:06.548 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.548 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:06.548 ************************************ 00:16:06.548 START TEST nvmf_filesystem_in_capsule 00:16:06.548 ************************************ 00:16:06.548 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:16:06.548 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:16:06.548 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:16:06.548 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:06.548 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:06.548 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:06.806 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=72144 00:16:06.806 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:06.806 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 72144 00:16:06.806 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 72144 ']' 00:16:06.806 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.806 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:06.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.806 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
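
The nvmf_filesystem_in_capsule pass starting here repeats the identical ext4/btrfs/xfs cycle with one difference: nvmf_filesystem_part 4096 sets in_capsule=4096, so the transport is created with -c 4096 and the initiator may carry up to 4 KiB of write data inside the command capsule itself rather than waiting for a separate ready-to-transfer exchange. The two halves of the file differ only in that flag:

# first half (no in-capsule data):
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
# second half (up to 4096 bytes of immediate data per command):
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
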
00:16:06.806 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:06.806 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:06.806 [2024-12-06 13:10:13.138768] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:16:06.806 [2024-12-06 13:10:13.138884] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.806 [2024-12-06 13:10:13.290036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:06.806 [2024-12-06 13:10:13.331391] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.806 [2024-12-06 13:10:13.331454] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.806 [2024-12-06 13:10:13.331474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.806 [2024-12-06 13:10:13.331488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.806 [2024-12-06 13:10:13.331498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.065 [2024-12-06 13:10:13.332390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.065 [2024-12-06 13:10:13.332517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.065 [2024-12-06 13:10:13.332554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.065 [2024-12-06 13:10:13.332556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:07.065 [2024-12-06 13:10:13.456196] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:07.065 13:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:07.065 Malloc1 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.065 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:07.066 [2024-12-06 13:10:13.571694] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:07.066 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.066 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:16:07.066 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:16:07.066 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:07.066 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:16:07.066 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:16:07.066 13:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:16:07.066 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.066 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:07.324 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.324 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:07.324 { 00:16:07.324 "aliases": [ 00:16:07.324 "2866a167-3575-43ae-a12d-9ed82dfc568d" 00:16:07.324 ], 00:16:07.324 "assigned_rate_limits": { 00:16:07.324 "r_mbytes_per_sec": 0, 00:16:07.324 "rw_ios_per_sec": 0, 00:16:07.324 "rw_mbytes_per_sec": 0, 00:16:07.324 "w_mbytes_per_sec": 0 00:16:07.324 }, 00:16:07.324 "block_size": 512, 00:16:07.324 "claim_type": "exclusive_write", 00:16:07.324 "claimed": true, 00:16:07.324 "driver_specific": {}, 00:16:07.324 "memory_domains": [ 00:16:07.324 { 00:16:07.324 "dma_device_id": "system", 00:16:07.324 "dma_device_type": 1 00:16:07.324 }, 00:16:07.324 { 00:16:07.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.324 "dma_device_type": 2 00:16:07.324 } 00:16:07.324 ], 00:16:07.324 "name": "Malloc1", 00:16:07.324 "num_blocks": 1048576, 00:16:07.324 "product_name": "Malloc disk", 00:16:07.324 "supported_io_types": { 00:16:07.324 "abort": true, 00:16:07.324 "compare": false, 00:16:07.324 "compare_and_write": false, 00:16:07.324 "copy": true, 00:16:07.324 "flush": true, 00:16:07.324 "get_zone_info": false, 00:16:07.324 "nvme_admin": false, 00:16:07.324 "nvme_io": false, 00:16:07.324 "nvme_io_md": false, 00:16:07.324 "nvme_iov_md": false, 00:16:07.324 "read": true, 00:16:07.324 "reset": true, 00:16:07.324 "seek_data": false, 00:16:07.324 "seek_hole": false, 00:16:07.324 "unmap": true, 00:16:07.324 "write": true, 00:16:07.324 "write_zeroes": true, 00:16:07.324 "zcopy": true, 00:16:07.324 "zone_append": false, 00:16:07.324 "zone_management": false 00:16:07.324 }, 00:16:07.324 "uuid": "2866a167-3575-43ae-a12d-9ed82dfc568d", 00:16:07.324 "zoned": false 00:16:07.324 } 00:16:07.324 ]' 00:16:07.324 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:07.324 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:16:07.324 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:07.324 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:16:07.324 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:16:07.324 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:16:07.324 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:16:07.324 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:07.583 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:16:07.583 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:16:07.583 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:07.583 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:07.583 13:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:16:09.482 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:09.482 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:09.482 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:09.482 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:09.482 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:09.482 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:16:09.482 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:16:09.482 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:16:09.482 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:16:09.482 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:16:09.482 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:09.482 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:09.482 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:16:09.482 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:16:09.482 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:16:09.482 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:16:09.482 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:16:09.482 13:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:16:09.739 13:10:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:10.692 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:16:10.692 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:10.692 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:10.692 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.692 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:10.692 ************************************ 00:16:10.692 START TEST filesystem_in_capsule_ext4 00:16:10.692 ************************************ 00:16:10.692 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:16:10.692 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:10.692 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:10.692 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:10.692 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:16:10.692 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:10.692 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:16:10.692 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:16:10.692 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:16:10.692 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:16:10.692 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:10.692 mke2fs 1.47.0 (5-Feb-2023) 00:16:10.692 Discarding device blocks: 0/522240 done 00:16:10.692 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:10.692 Filesystem UUID: 8ceb32ca-110e-4ef2-888a-f9100d918aab 00:16:10.692 Superblock backups stored on blocks: 00:16:10.692 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:10.692 00:16:10.692 Allocating group tables: 0/64 done 00:16:10.692 Writing inode tables: 
0/64 done 00:16:10.692 Creating journal (8192 blocks): done 00:16:10.692 Writing superblocks and filesystem accounting information: 0/64 done 00:16:10.692 00:16:10.692 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:16:10.692 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:17.270 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:17.270 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:16:17.270 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:17.270 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:16:17.270 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:17.270 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:17.270 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 72144 00:16:17.270 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:17.270 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:17.270 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:17.270 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:17.270 00:16:17.270 real 0m5.574s 00:16:17.270 user 0m0.022s 00:16:17.270 sys 0m0.060s 00:16:17.270 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.270 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:17.270 ************************************ 00:16:17.270 END TEST filesystem_in_capsule_ext4 00:16:17.270 ************************************ 00:16:17.270 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:17.270 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:17.270 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.270 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:17.271 
************************************ 00:16:17.271 START TEST filesystem_in_capsule_btrfs 00:16:17.271 ************************************ 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:17.271 btrfs-progs v6.8.1 00:16:17.271 See https://btrfs.readthedocs.io for more information. 00:16:17.271 00:16:17.271 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:16:17.271 NOTE: several default settings have changed in version 5.15, please make sure 00:16:17.271 this does not affect your deployments: 00:16:17.271 - DUP for metadata (-m dup) 00:16:17.271 - enabled no-holes (-O no-holes) 00:16:17.271 - enabled free-space-tree (-R free-space-tree) 00:16:17.271 00:16:17.271 Label: (null) 00:16:17.271 UUID: 32718917-fa91-4e74-8290-aa2018d15290 00:16:17.271 Node size: 16384 00:16:17.271 Sector size: 4096 (CPU page size: 4096) 00:16:17.271 Filesystem size: 510.00MiB 00:16:17.271 Block group profiles: 00:16:17.271 Data: single 8.00MiB 00:16:17.271 Metadata: DUP 32.00MiB 00:16:17.271 System: DUP 8.00MiB 00:16:17.271 SSD detected: yes 00:16:17.271 Zoned device: no 00:16:17.271 Features: extref, skinny-metadata, no-holes, free-space-tree 00:16:17.271 Checksum: crc32c 00:16:17.271 Number of devices: 1 00:16:17.271 Devices: 00:16:17.271 ID SIZE PATH 00:16:17.271 1 510.00MiB /dev/nvme0n1p1 00:16:17.271 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 72144 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:17.271 00:16:17.271 real 0m0.224s 00:16:17.271 user 0m0.025s 00:16:17.271 sys 0m0.062s 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- 
# set +x 00:16:17.271 ************************************ 00:16:17.271 END TEST filesystem_in_capsule_btrfs 00:16:17.271 ************************************ 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:17.271 ************************************ 00:16:17.271 START TEST filesystem_in_capsule_xfs 00:16:17.271 ************************************ 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:16:17.271 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:17.271 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:17.271 = sectsz=512 attr=2, projid32bit=1 00:16:17.271 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:17.271 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:17.271 data = bsize=4096 blocks=130560, imaxpct=25 00:16:17.271 = sunit=0 swidth=0 blks 00:16:17.271 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:17.271 log =internal log bsize=4096 blocks=16384, version=2 00:16:17.271 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:17.271 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:17.271 Discarding blocks...Done. 
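The three mkfs runs traced above (ext4, btrfs, xfs on the same GPT partition) all go through the make_filesystem helper whose xtrace is visible in the log: ext4 gets -F to force creation, while btrfs and xfs get -f. A minimal bash sketch of that pattern, with the helper name and variables taken from the xtrace; the bounded retry loop is an assumption beyond what the log shows:

    # Pick the right force flag per filesystem, then run mkfs on the device.
    make_filesystem() {
        local fstype=$1 dev_name=$2
        local i=0 force
        if [ "$fstype" = ext4 ]; then
            force=-F          # mke2fs spells "force" as -F
        else
            force=-f          # mkfs.btrfs and mkfs.xfs use -f
        fi
        # Retry a few times in case the partition node is still settling
        # after partprobe (assumed; the log only shows the counter i=0).
        until "mkfs.$fstype" "$force" "$dev_name"; do
            (( ++i <= 3 )) || return 1
            sleep 1
        done
        return 0
    }

e.g. make_filesystem xfs /dev/nvme0n1p1, matching the invocation traced above.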
00:16:17.271 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:16:17.271 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 72144 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:19.176 00:16:19.176 real 0m2.561s 00:16:19.176 user 0m0.022s 00:16:19.176 sys 0m0.051s 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:19.176 ************************************ 00:16:19.176 END TEST filesystem_in_capsule_xfs 00:16:19.176 ************************************ 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:19.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 72144 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 72144 ']' 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 72144 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72144 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:19.176 killing process with pid 72144 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72144' 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 72144 00:16:19.176 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 72144 00:16:19.433 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:19.433 00:16:19.433 real 0m12.845s 00:16:19.433 user 0m48.925s 00:16:19.433 sys 0m1.883s 00:16:19.433 13:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:19.433 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:19.433 ************************************ 00:16:19.433 END TEST nvmf_filesystem_in_capsule 00:16:19.433 ************************************ 00:16:19.433 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:16:19.433 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:19.434 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:16:19.691 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:19.691 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:16:19.691 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:19.691 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:19.691 rmmod nvme_tcp 00:16:19.691 rmmod nvme_fabrics 00:16:19.691 rmmod nvme_keyring 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:19.691 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:16:20.050 00:16:20.050 real 0m28.190s 00:16:20.050 user 1m43.156s 00:16:20.050 sys 0m4.325s 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:20.050 ************************************ 00:16:20.050 END TEST nvmf_filesystem 00:16:20.050 ************************************ 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:20.050 ************************************ 00:16:20.050 START TEST nvmf_target_discovery 00:16:20.050 ************************************ 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:20.050 * Looking for test storage... 
00:16:20.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:20.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.050 --rc genhtml_branch_coverage=1 00:16:20.050 --rc genhtml_function_coverage=1 00:16:20.050 --rc genhtml_legend=1 00:16:20.050 --rc geninfo_all_blocks=1 00:16:20.050 --rc geninfo_unexecuted_blocks=1 00:16:20.050 00:16:20.050 ' 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:20.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.050 --rc genhtml_branch_coverage=1 00:16:20.050 --rc genhtml_function_coverage=1 00:16:20.050 --rc genhtml_legend=1 00:16:20.050 --rc geninfo_all_blocks=1 00:16:20.050 --rc geninfo_unexecuted_blocks=1 00:16:20.050 00:16:20.050 ' 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:20.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.050 --rc genhtml_branch_coverage=1 00:16:20.050 --rc genhtml_function_coverage=1 00:16:20.050 --rc genhtml_legend=1 00:16:20.050 --rc geninfo_all_blocks=1 00:16:20.050 --rc geninfo_unexecuted_blocks=1 00:16:20.050 00:16:20.050 ' 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:20.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.050 --rc genhtml_branch_coverage=1 00:16:20.050 --rc genhtml_function_coverage=1 00:16:20.050 --rc genhtml_legend=1 00:16:20.050 --rc geninfo_all_blocks=1 00:16:20.050 --rc geninfo_unexecuted_blocks=1 00:16:20.050 00:16:20.050 ' 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.050 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:20.324 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.324 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:20.325 Cannot find device "nvmf_init_br" 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:20.325 Cannot find device "nvmf_init_br2" 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:20.325 Cannot find device "nvmf_tgt_br" 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:20.325 Cannot find device "nvmf_tgt_br2" 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:20.325 Cannot find device "nvmf_init_br" 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:20.325 Cannot find device "nvmf_init_br2" 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:20.325 Cannot find device "nvmf_tgt_br" 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:20.325 Cannot find device "nvmf_tgt_br2" 00:16:20.325 13:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:20.325 Cannot find device "nvmf_br" 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:20.325 Cannot find device "nvmf_init_if" 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:20.325 Cannot find device "nvmf_init_if2" 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:20.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:20.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:20.325 13:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:20.325 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:20.585 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:20.585 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:16:20.585 00:16:20.585 --- 10.0.0.3 ping statistics --- 00:16:20.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.585 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:20.585 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:20.585 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:16:20.585 00:16:20.585 --- 10.0.0.4 ping statistics --- 00:16:20.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.585 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:20.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:20.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:20.585 00:16:20.585 --- 10.0.0.1 ping statistics --- 00:16:20.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.585 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:20.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:16:20.585 00:16:20.585 --- 10.0.0.2 ping statistics --- 00:16:20.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.585 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=72710 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
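Before the nvmf_tgt launch traced immediately above, nvmf_veth_init laid out a small bridged topology: initiator-side veth ends stay in the root namespace (10.0.0.1, 10.0.0.2), the target-side ends move into nvmf_tgt_ns_spdk (10.0.0.3, 10.0.0.4), the peer ends are enslaved to the nvmf_br bridge, and the four pings confirm reachability both ways. A condensed replay with one pair per side (interface names, addresses, and the iptables rule copied from the log; the second pair of each side is omitted for brevity):

    # One initiator veth pair and one target veth pair, bridged together.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk   # target end lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$l" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br          # bridge the peer ends
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                               # root ns -> target ns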
00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 72710 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 72710 ']' 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.585 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.585 [2024-12-06 13:10:27.027514] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:16:20.585 [2024-12-06 13:10:27.027622] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.844 [2024-12-06 13:10:27.180342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:20.844 [2024-12-06 13:10:27.214626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.844 [2024-12-06 13:10:27.214677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.844 [2024-12-06 13:10:27.214688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:20.844 [2024-12-06 13:10:27.214697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:20.844 [2024-12-06 13:10:27.214704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
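waitforlisten above parks the test (max_retries=100) until the freshly launched target answers on /var/tmp/spdk.sock. A bash sketch of that launch-and-poll handshake, using rpc.py's rpc_get_methods as the liveness probe; the real helper inspects the socket more carefully, and paths are as in this CI tree:

    # Launch the target inside the netns, then poll its RPC socket.
    spdk=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1 && break
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done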
00:16:20.844 [2024-12-06 13:10:27.215535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.844 [2024-12-06 13:10:27.215575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.844 [2024-12-06 13:10:27.215629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:20.844 [2024-12-06 13:10:27.215636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.780 [2024-12-06 13:10:28.141391] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.780 Null1 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.780 13:10:28
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.780 [2024-12-06 13:10:28.185527] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.780 Null2 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.780 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:16:21.781 Null3 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.781 Null4 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.781 13:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.781 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -a 10.0.0.3 -s 4420 00:16:22.041 00:16:22.041 Discovery Log Number of Records 6, Generation counter 6 00:16:22.041 =====Discovery Log Entry 0====== 00:16:22.041 trtype: tcp 00:16:22.041 adrfam: ipv4 00:16:22.041 subtype: current discovery subsystem 00:16:22.041 treq: not required 00:16:22.041 portid: 0 00:16:22.041 trsvcid: 4420 00:16:22.041 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:22.041 traddr: 10.0.0.3 00:16:22.041 eflags: explicit discovery connections, duplicate discovery information 00:16:22.041 sectype: none 00:16:22.041 =====Discovery Log Entry 1====== 00:16:22.041 trtype: tcp 00:16:22.041 adrfam: ipv4 00:16:22.041 subtype: nvme subsystem 00:16:22.041 treq: not required 00:16:22.041 portid: 0 00:16:22.041 trsvcid: 4420 00:16:22.041 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:22.041 traddr: 10.0.0.3 00:16:22.041 eflags: none 00:16:22.041 sectype: none 00:16:22.041 =====Discovery Log Entry 2====== 00:16:22.041 trtype: tcp 00:16:22.041 adrfam: ipv4 00:16:22.041 subtype: nvme subsystem 00:16:22.041 treq: not required 00:16:22.041 portid: 0 00:16:22.041 trsvcid: 4420 00:16:22.041 subnqn: nqn.2016-06.io.spdk:cnode2 00:16:22.041 traddr: 10.0.0.3 00:16:22.041 eflags: none 00:16:22.041 sectype: none 00:16:22.041 =====Discovery Log Entry 3====== 00:16:22.041 trtype: tcp 00:16:22.041 adrfam: ipv4 00:16:22.041 subtype: nvme subsystem 00:16:22.041 treq: not required 00:16:22.041 portid: 0 00:16:22.041 trsvcid: 4420 00:16:22.041 subnqn: nqn.2016-06.io.spdk:cnode3 00:16:22.041 traddr: 10.0.0.3 00:16:22.041 eflags: none 00:16:22.041 sectype: none 00:16:22.041 =====Discovery Log Entry 4====== 00:16:22.041 trtype: tcp 00:16:22.041 adrfam: ipv4 00:16:22.041 subtype: nvme subsystem 
00:16:22.041 treq: not required 00:16:22.041 portid: 0 00:16:22.041 trsvcid: 4420 00:16:22.041 subnqn: nqn.2016-06.io.spdk:cnode4 00:16:22.041 traddr: 10.0.0.3 00:16:22.041 eflags: none 00:16:22.041 sectype: none 00:16:22.041 =====Discovery Log Entry 5====== 00:16:22.041 trtype: tcp 00:16:22.041 adrfam: ipv4 00:16:22.041 subtype: discovery subsystem referral 00:16:22.041 treq: not required 00:16:22.041 portid: 0 00:16:22.041 trsvcid: 4430 00:16:22.041 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:22.041 traddr: 10.0.0.3 00:16:22.041 eflags: none 00:16:22.041 sectype: none 00:16:22.041 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:16:22.041 Perform nvmf subsystem discovery via RPC 00:16:22.041 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:16:22.041 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.041 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.041 [ 00:16:22.041 { 00:16:22.041 "allow_any_host": true, 00:16:22.041 "hosts": [], 00:16:22.041 "listen_addresses": [ 00:16:22.041 { 00:16:22.041 "adrfam": "IPv4", 00:16:22.041 "traddr": "10.0.0.3", 00:16:22.041 "trsvcid": "4420", 00:16:22.041 "trtype": "TCP" 00:16:22.041 } 00:16:22.041 ], 00:16:22.041 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:22.041 "subtype": "Discovery" 00:16:22.041 }, 00:16:22.041 { 00:16:22.041 "allow_any_host": true, 00:16:22.041 "hosts": [], 00:16:22.041 "listen_addresses": [ 00:16:22.041 { 00:16:22.041 "adrfam": "IPv4", 00:16:22.041 "traddr": "10.0.0.3", 00:16:22.041 "trsvcid": "4420", 00:16:22.041 "trtype": "TCP" 00:16:22.041 } 00:16:22.041 ], 00:16:22.041 "max_cntlid": 65519, 00:16:22.041 "max_namespaces": 32, 00:16:22.041 "min_cntlid": 1, 00:16:22.041 "model_number": "SPDK bdev Controller", 00:16:22.041 "namespaces": [ 00:16:22.041 { 00:16:22.041 "bdev_name": "Null1", 00:16:22.041 "name": "Null1", 00:16:22.041 "nguid": "E527C35D6EC54986A1750BF5A405AA0C", 00:16:22.041 "nsid": 1, 00:16:22.041 "uuid": "e527c35d-6ec5-4986-a175-0bf5a405aa0c" 00:16:22.041 } 00:16:22.041 ], 00:16:22.041 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.041 "serial_number": "SPDK00000000000001", 00:16:22.041 "subtype": "NVMe" 00:16:22.041 }, 00:16:22.041 { 00:16:22.041 "allow_any_host": true, 00:16:22.041 "hosts": [], 00:16:22.041 "listen_addresses": [ 00:16:22.041 { 00:16:22.041 "adrfam": "IPv4", 00:16:22.041 "traddr": "10.0.0.3", 00:16:22.041 "trsvcid": "4420", 00:16:22.041 "trtype": "TCP" 00:16:22.041 } 00:16:22.041 ], 00:16:22.042 "max_cntlid": 65519, 00:16:22.042 "max_namespaces": 32, 00:16:22.042 "min_cntlid": 1, 00:16:22.042 "model_number": "SPDK bdev Controller", 00:16:22.042 "namespaces": [ 00:16:22.042 { 00:16:22.042 "bdev_name": "Null2", 00:16:22.042 "name": "Null2", 00:16:22.042 "nguid": "48C1DD4C6193426AA347154661470D2B", 00:16:22.042 "nsid": 1, 00:16:22.042 "uuid": "48c1dd4c-6193-426a-a347-154661470d2b" 00:16:22.042 } 00:16:22.042 ], 00:16:22.042 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:22.042 "serial_number": "SPDK00000000000002", 00:16:22.042 "subtype": "NVMe" 00:16:22.042 }, 00:16:22.042 { 00:16:22.042 "allow_any_host": true, 00:16:22.042 "hosts": [], 00:16:22.042 "listen_addresses": [ 00:16:22.042 { 00:16:22.042 "adrfam": "IPv4", 00:16:22.042 "traddr": "10.0.0.3", 00:16:22.042 "trsvcid": "4420", 00:16:22.042 
"trtype": "TCP" 00:16:22.042 } 00:16:22.042 ], 00:16:22.042 "max_cntlid": 65519, 00:16:22.042 "max_namespaces": 32, 00:16:22.042 "min_cntlid": 1, 00:16:22.042 "model_number": "SPDK bdev Controller", 00:16:22.042 "namespaces": [ 00:16:22.042 { 00:16:22.042 "bdev_name": "Null3", 00:16:22.042 "name": "Null3", 00:16:22.042 "nguid": "4D4B63E8C42542F6A684358E36B5204E", 00:16:22.042 "nsid": 1, 00:16:22.042 "uuid": "4d4b63e8-c425-42f6-a684-358e36b5204e" 00:16:22.042 } 00:16:22.042 ], 00:16:22.042 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:16:22.042 "serial_number": "SPDK00000000000003", 00:16:22.042 "subtype": "NVMe" 00:16:22.042 }, 00:16:22.042 { 00:16:22.042 "allow_any_host": true, 00:16:22.042 "hosts": [], 00:16:22.042 "listen_addresses": [ 00:16:22.042 { 00:16:22.042 "adrfam": "IPv4", 00:16:22.042 "traddr": "10.0.0.3", 00:16:22.042 "trsvcid": "4420", 00:16:22.042 "trtype": "TCP" 00:16:22.042 } 00:16:22.042 ], 00:16:22.042 "max_cntlid": 65519, 00:16:22.042 "max_namespaces": 32, 00:16:22.042 "min_cntlid": 1, 00:16:22.042 "model_number": "SPDK bdev Controller", 00:16:22.042 "namespaces": [ 00:16:22.042 { 00:16:22.042 "bdev_name": "Null4", 00:16:22.042 "name": "Null4", 00:16:22.042 "nguid": "6ACD89CA0C60455396B111E53D17CE62", 00:16:22.042 "nsid": 1, 00:16:22.042 "uuid": "6acd89ca-0c60-4553-96b1-11e53d17ce62" 00:16:22.042 } 00:16:22.042 ], 00:16:22.042 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:16:22.042 "serial_number": "SPDK00000000000004", 00:16:22.042 "subtype": "NVMe" 00:16:22.042 } 00:16:22.042 ] 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.042 13:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:16:22.042 13:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:22.042 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:22.302 rmmod nvme_tcp 00:16:22.302 rmmod nvme_fabrics 00:16:22.302 rmmod nvme_keyring 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 72710 ']' 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 72710 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 72710 ']' 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 72710 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72710 00:16:22.302 killing process with pid 72710 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72710' 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 72710 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 72710 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:22.302 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:22.562 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:22.562 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:22.562 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.562 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:22.562 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:22.562 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:22.562 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:22.562 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:22.562 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:22.562 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:22.562 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.562 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.562 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:22.562 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.562 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.562 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.562 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:16:22.562 00:16:22.562 real 0m2.720s 00:16:22.562 user 0m7.003s 00:16:22.562 sys 0m0.664s 00:16:22.562 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:16:22.562 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.562 ************************************ 00:16:22.562 END TEST nvmf_target_discovery 00:16:22.562 ************************************ 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:22.821 ************************************ 00:16:22.821 START TEST nvmf_referrals 00:16:22.821 ************************************ 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:22.821 * Looking for test storage... 00:16:22.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:16:22.821 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:22.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.822 --rc genhtml_branch_coverage=1 00:16:22.822 --rc genhtml_function_coverage=1 00:16:22.822 --rc genhtml_legend=1 00:16:22.822 --rc geninfo_all_blocks=1 00:16:22.822 --rc geninfo_unexecuted_blocks=1 00:16:22.822 00:16:22.822 ' 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:22.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.822 --rc genhtml_branch_coverage=1 00:16:22.822 --rc genhtml_function_coverage=1 00:16:22.822 --rc genhtml_legend=1 00:16:22.822 --rc geninfo_all_blocks=1 00:16:22.822 --rc geninfo_unexecuted_blocks=1 00:16:22.822 00:16:22.822 ' 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:22.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.822 --rc genhtml_branch_coverage=1 00:16:22.822 --rc genhtml_function_coverage=1 00:16:22.822 --rc genhtml_legend=1 00:16:22.822 --rc geninfo_all_blocks=1 00:16:22.822 --rc geninfo_unexecuted_blocks=1 00:16:22.822 00:16:22.822 ' 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:22.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.822 --rc genhtml_branch_coverage=1 00:16:22.822 --rc genhtml_function_coverage=1 00:16:22.822 --rc genhtml_legend=1 00:16:22.822 --rc geninfo_all_blocks=1 00:16:22.822 --rc geninfo_unexecuted_blocks=1 00:16:22.822 00:16:22.822 ' 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
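The lt/cmp_versions trace above amounts to a field-by-field numeric comparison after splitting the version strings on '.', '-' and ':'; a self-contained sketch of that idiom (simplified, not the scripts/common.sh source itself):

  version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          # the first differing field decides; missing fields count as 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      done
      return 1  # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "1.15 < 2"  # mirrors the 'lt 1.15 2' call traced above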
00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:22.822 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:16:22.822 13:10:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:22.822 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:22.823 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:22.823 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:22.823 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:22.823 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:22.823 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:22.823 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:22.823 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:22.823 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:23.081 Cannot find device "nvmf_init_br" 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:23.081 Cannot find device "nvmf_init_br2" 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:23.081 Cannot find device "nvmf_tgt_br" 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:23.081 Cannot find device "nvmf_tgt_br2" 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:23.081 Cannot find device "nvmf_init_br" 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:23.081 Cannot find device "nvmf_init_br2" 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:23.081 Cannot find device "nvmf_tgt_br" 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:23.081 Cannot find device "nvmf_tgt_br2" 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:23.081 Cannot find device "nvmf_br" 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:23.081 Cannot find device "nvmf_init_if" 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:23.081 Cannot find device "nvmf_init_if2" 00:16:23.081 13:10:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.081 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.082 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:23.082 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:23.082 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:23.082 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:23.082 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:23.082 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:23.082 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:23.082 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:23.082 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:23.082 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.082 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.340 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.340 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:23.340 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:23.340 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.340 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:23.340 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.340 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.340 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.340 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:23.340 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:23.340 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:23.340 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.340 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:23.340 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:23.340 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:23.340 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:16:23.340 00:16:23.340 --- 10.0.0.3 ping statistics --- 00:16:23.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.340 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:16:23.340 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:23.340 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:23.340 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:16:23.340 00:16:23.340 --- 10.0.0.4 ping statistics --- 00:16:23.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.340 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:23.340 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:23.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:23.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:23.340 00:16:23.340 --- 10.0.0.1 ping statistics --- 00:16:23.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.340 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:23.340 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:23.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:23.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:16:23.340 00:16:23.340 --- 10.0.0.2 ping statistics --- 00:16:23.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.341 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=72993 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 72993 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 72993 ']' 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.341 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.341 [2024-12-06 13:10:29.816194] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
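
The block above is nvmf/common.sh bringing up its veth-based test topology: a network namespace for the target, four veth pairs whose bridge-side ends are all enslaved to a single bridge, addresses 10.0.0.1-2 on the initiator side and 10.0.0.3-4 inside the namespace, iptables ACCEPT rules for the NVMe/TCP port 4420, and four pings to prove connectivity before the target app is launched inside the namespace. A condensed sketch of the same topology follows; interface names and addresses are taken from the trace, but the loop is an editorial simplification rather than the script's literal structure:

    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    i=1
    for pair in nvmf_init_if:nvmf_init_br nvmf_init_if2:nvmf_init_br2 \
                nvmf_tgt_if:nvmf_tgt_br nvmf_tgt_if2:nvmf_tgt_br2; do
        dev=${pair%%:*}; br_end=${pair##*:}
        ip link add "$dev" type veth peer name "$br_end"
        case $dev in
            nvmf_tgt_*)                 # target-side ends move into the namespace
                ip link set "$dev" netns "$NS"
                ip netns exec "$NS" ip addr add "10.0.0.$i/24" dev "$dev"
                ip netns exec "$NS" ip link set "$dev" up ;;
            *)                          # initiator-side ends stay on the host
                ip addr add "10.0.0.$i/24" dev "$dev"
                ip link set "$dev" up ;;
        esac
        ip link set "$br_end" up
        ip link set "$br_end" master nvmf_br   # every bridge-side end joins nvmf_br
        i=$((i + 1))
    done
    ip netns exec "$NS" ip link set lo up

With all four bridge-side ends on nvmf_br, the host-side 10.0.0.1/.2 and the namespaced 10.0.0.3/.4 share one L2 segment, which is why both the plain pings and the netns-exec pings succeed.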
00:16:23.341 [2024-12-06 13:10:29.816292] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.599 [2024-12-06 13:10:29.967680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.599 [2024-12-06 13:10:30.007147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.599 [2024-12-06 13:10:30.007212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.599 [2024-12-06 13:10:30.007227] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.599 [2024-12-06 13:10:30.007237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.599 [2024-12-06 13:10:30.007245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.599 [2024-12-06 13:10:30.010985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.599 [2024-12-06 13:10:30.011173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.599 [2024-12-06 13:10:30.011246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.599 [2024-12-06 13:10:30.011256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.599 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.599 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:16:23.599 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:23.599 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:23.599 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.859 [2024-12-06 13:10:30.149045] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.859 [2024-12-06 13:10:30.161216] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -a 10.0.0.3 -s 8009 -o json 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:23.859 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:24.118 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:24.118 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:24.118 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:16:24.118 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.118 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.118 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.118 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:16:24.118 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.118 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.118 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.118 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:16:24.118 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.118 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.118 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.119 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:16:24.119 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:24.119 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.119 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.119 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.119 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:16:24.119 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:16:24.119 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:24.119 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:16:24.119 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -a 10.0.0.3 -s 8009 -o json 00:16:24.119 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:24.119 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:24.377 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:24.377 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:16:24.377 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:16:24.377 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.377 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.377 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.377 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:24.377 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.377 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.377 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.377 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:16:24.377 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:24.377 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:24.377 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.377 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:24.378 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:24.378 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.378 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.378 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:16:24.378 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:24.378 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:16:24.378 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:24.378 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:24.378 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:24.378 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -a 10.0.0.3 -s 8009 -o json 00:16:24.378 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:24.378 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:16:24.378 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:24.378 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:16:24.378 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:16:24.378 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:24.378 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -a 10.0.0.3 -s 8009 -o json 00:16:24.378 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:24.636 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:16:24.636 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:16:24.636 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:16:24.636 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:24.636 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -a 10.0.0.3 -s 8009 -o json 00:16:24.636 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:24.636 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -a 10.0.0.3 -s 8009 -o json 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:24.637 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:24.897 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:16:24.897 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:24.897 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:16:24.897 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:16:24.897 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:24.897 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -a 10.0.0.3 -s 8009 -o json 00:16:24.897 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:24.897 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:16:24.897 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:16:24.897 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:24.897 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:16:24.897 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -a 10.0.0.3 -s 8009 -o json 00:16:24.897 
13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:25.156 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:25.156 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:16:25.156 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.156 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:25.156 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.156 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:25.156 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.156 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:25.156 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:16:25.156 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.156 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:16:25.156 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:16:25.156 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:25.156 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:25.156 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -a 10.0.0.3 -s 8009 -o json 00:16:25.156 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:25.156 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
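
That closes the body of the referral test: three referrals were added over RPC, read back both through nvmf_discovery_get_referrals on the target side and through an nvme discover log page from the host side, exercised in the subsystem-scoped variants (-n discovery, -n nqn.2016-06.io.spdk:cnode1), and removed again until both views were empty. The same flow, sketched with SPDK's rpc.py invoked directly rather than through the test's rpc_cmd wrapper; dropping the --hostnqn/--hostid pair that the trace passes to nvme discover is a simplification:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # target-side view: the RPC should list all three referrals
    $rpc nvmf_discovery_get_referrals | jq length     # expect: 3
    # host-side view: the referrals show up in the discovery log page
    nvme discover -t tcp -a 10.0.0.3 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
    $rpc nvmf_discovery_get_referrals | jq length     # expect: 0

The jq filter is the one the test itself uses: it keeps every discovery-log record except the entry describing the discovery subsystem being queried, so what remains is exactly the set of referred-to addresses.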
00:16:25.416 rmmod nvme_tcp 00:16:25.416 rmmod nvme_fabrics 00:16:25.416 rmmod nvme_keyring 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 72993 ']' 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 72993 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 72993 ']' 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 72993 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72993 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.416 killing process with pid 72993 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72993' 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 72993 00:16:25.416 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 72993 00:16:25.717 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:25.717 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:25.717 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:25.717 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:16:25.717 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:16:25.717 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:25.717 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:16:25.717 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:25.717 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:25.717 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:25.717 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:25.717 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:25.717 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:25.717 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:25.717 13:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:25.717 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:25.717 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:25.717 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:25.717 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:25.717 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:25.717 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:25.717 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:25.717 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:25.717 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.717 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.717 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.717 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:16:25.717 00:16:25.717 real 0m3.094s 00:16:25.717 user 0m8.966s 00:16:25.717 sys 0m0.925s 00:16:25.717 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:25.717 ************************************ 00:16:25.717 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:25.717 END TEST nvmf_referrals 00:16:25.717 ************************************ 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:25.976 ************************************ 00:16:25.976 START TEST nvmf_connect_disconnect 00:16:25.976 ************************************ 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:25.976 * Looking for test storage... 
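
One detail of the teardown just above is worth pulling out before the next test repeats it: firewall state is restored by tag, not by remembering rules. The @790/@791 trace lines show every rule being inserted through an ipts wrapper that appends an SPDK_NVMF comment, and cleanup (iptr) simply filters those comments out of a full save/restore cycle. A minimal sketch of the pair, reconstructed from those trace lines:

    ipts() {    # insert a rule, tagged with its own argument list
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {    # drop every tagged rule in one pass
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    iptr    # both tagged rules disappear; untagged rules survive the restore

This spares the test from replaying its inserts in reverse, and it also sweeps up rules left behind by an earlier run that died before reaching its own teardown.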
00:16:25.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.976 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:25.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.977 --rc genhtml_branch_coverage=1 00:16:25.977 --rc genhtml_function_coverage=1 00:16:25.977 --rc genhtml_legend=1 00:16:25.977 --rc geninfo_all_blocks=1 00:16:25.977 --rc geninfo_unexecuted_blocks=1 00:16:25.977 00:16:25.977 ' 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:25.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.977 --rc genhtml_branch_coverage=1 00:16:25.977 --rc genhtml_function_coverage=1 00:16:25.977 --rc genhtml_legend=1 00:16:25.977 --rc geninfo_all_blocks=1 00:16:25.977 --rc geninfo_unexecuted_blocks=1 00:16:25.977 00:16:25.977 ' 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:25.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.977 --rc genhtml_branch_coverage=1 00:16:25.977 --rc genhtml_function_coverage=1 00:16:25.977 --rc genhtml_legend=1 00:16:25.977 --rc geninfo_all_blocks=1 00:16:25.977 --rc geninfo_unexecuted_blocks=1 00:16:25.977 00:16:25.977 ' 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:25.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.977 --rc genhtml_branch_coverage=1 00:16:25.977 --rc genhtml_function_coverage=1 00:16:25.977 --rc genhtml_legend=1 00:16:25.977 --rc geninfo_all_blocks=1 00:16:25.977 --rc geninfo_unexecuted_blocks=1 00:16:25.977 00:16:25.977 ' 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.977 13:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:25.977 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.977 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:25.978 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:26.236 Cannot find device "nvmf_init_br" 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:26.236 Cannot find device "nvmf_init_br2" 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:26.236 Cannot find device "nvmf_tgt_br" 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.236 Cannot find device "nvmf_tgt_br2" 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:26.236 Cannot find device "nvmf_init_br" 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:26.236 Cannot find device "nvmf_init_br2" 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:26.236 Cannot find device "nvmf_tgt_br" 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:26.236 Cannot find device "nvmf_tgt_br2" 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
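
The run of Cannot find device and Cannot open network namespace messages here, as at the top of the referrals run, is expected rather than a failure: before building its topology, nvmf_veth_init first tears down whatever a previous run may have left behind, and every teardown step is guarded so that a missing device is a no-op. The trace gives the idiom away by printing each command and a bare true from the same common.sh line number, i.e. cmd || true. Condensed:

    # pre-clean: each step tolerates absence, so a fresh boot and a rerun
    # after a crashed test start from the same blank state
    for br_end in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br_end" nomaster || true
        ip link set "$br_end" down     || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if  || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true

Only after this pre-clean, which continues just below with the link and namespace deletions, does the script recreate the namespace, veth pairs, and bridge seen in the following lines.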
00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:26.236 Cannot find device "nvmf_br" 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:26.236 Cannot find device "nvmf_init_if" 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:26.236 Cannot find device "nvmf_init_if2" 00:16:26.236 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:16:26.237 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.237 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:16:26.237 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.237 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:16:26.237 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:26.237 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:26.237 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:26.237 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:26.237 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:26.237 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:26.237 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:26.237 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:26.496 13:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:26.496 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:26.496 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.163 ms 00:16:26.496 00:16:26.496 --- 10.0.0.3 ping statistics --- 00:16:26.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.496 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:26.496 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:26.496 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:16:26.496 00:16:26.496 --- 10.0.0.4 ping statistics --- 00:16:26.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.496 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:26.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:26.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:26.496 00:16:26.496 --- 10.0.0.1 ping statistics --- 00:16:26.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.496 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:26.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:26.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:16:26.496 00:16:26.496 --- 10.0.0.2 ping statistics --- 00:16:26.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.496 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=73337 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 73337 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 73337 ']' 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.496 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:26.496 [2024-12-06 13:10:32.993333] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:16:26.496 [2024-12-06 13:10:32.993868] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.755 [2024-12-06 13:10:33.142383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.755 [2024-12-06 13:10:33.185184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.755 [2024-12-06 13:10:33.185235] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.755 [2024-12-06 13:10:33.185246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.755 [2024-12-06 13:10:33.185254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.755 [2024-12-06 13:10:33.185261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
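[annotation] Condensed from the nvmf_veth_init trace above, a sketch of the topology for the first initiator/target pair only; nvmf_init_if2/nvmf_tgt_if2 (10.0.0.2/10.0.0.4) are wired identically, as the log shows. Each side is a veth pair, the target end is moved into the nvmf_tgt_ns_spdk namespace, the host-side peers are enslaved to the nvmf_br bridge, and the iptables rule is tagged with an SPDK_NVMF comment so teardown can find it later:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3                                   # initiator side -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target side -> initiator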
00:16:26.755 [2024-12-06 13:10:33.186061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.755 [2024-12-06 13:10:33.186214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.755 [2024-12-06 13:10:33.187532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:26.755 [2024-12-06 13:10:33.187578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:27.014 [2024-12-06 13:10:33.328963] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:27.014 13:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:27.014 [2024-12-06 13:10:33.390668] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:16:27.014 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:16:29.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.425 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:38.425 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:38.425 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:38.425 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:38.425 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:38.425 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:38.425 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:38.425 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:38.425 rmmod nvme_tcp 00:16:38.425 rmmod nvme_fabrics 00:16:38.425 rmmod nvme_keyring 00:16:38.425 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:38.425 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:38.425 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:38.425 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 73337 ']' 00:16:38.425 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 73337 00:16:38.425 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 73337 ']' 00:16:38.425 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 73337 00:16:38.425 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
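[annotation] The five "disconnected 1 controller(s)" lines above are the whole of the connect_disconnect test body. Reconstructed below as plain commands: the target launch and flags are copied from the nvmfappstart trace; rpc_cmd is spelled out as scripts/rpc.py, which is what the harness wraps; the bare socket check stands in for the harness's waitforlisten, which polls more carefully; and the nvme-cli flags are simplified, since the real script also passes the generated --hostnqn/--hostid and waits for the namespace block device before each disconnect:

    # launch the target inside the namespace and wait for its RPC socket
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

    # target-side configuration over the RPC socket
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512            # returns bdev name Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # initiator-side loop: num_iterations=5 connect/disconnect cycles
    for _ in 1 2 3 4 5; do
        nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "disconnected 1 controller(s)"
    done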
00:16:38.425 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.425 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73337 00:16:38.688 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.688 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.688 killing process with pid 73337 00:16:38.688 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73337' 00:16:38.688 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 73337 00:16:38.688 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 73337 00:16:38.688 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:38.688 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:38.688 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:38.688 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:38.688 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:38.689 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:38.689 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:38.689 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:38.689 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:38.689 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:38.689 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:38.689 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:38.689 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:38.689 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:38.689 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:39.017 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:39.017 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:39.017 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:39.018 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:39.018 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:39.018 13:10:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:39.018 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:39.018 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:39.018 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.018 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:39.018 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.018 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:16:39.018 00:16:39.018 real 0m13.096s 00:16:39.018 user 0m46.721s 00:16:39.018 sys 0m1.933s 00:16:39.018 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.018 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:39.018 ************************************ 00:16:39.018 END TEST nvmf_connect_disconnect 00:16:39.018 ************************************ 00:16:39.018 13:10:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:39.018 13:10:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:39.018 13:10:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.018 13:10:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:39.018 ************************************ 00:16:39.018 START TEST nvmf_multitarget 00:16:39.018 ************************************ 00:16:39.018 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:39.018 * Looking for test storage... 
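[annotation] The nvmftestfini/nvmf_veth_fini sequence recorded just above (before the nvmf_multitarget preamble), distilled. iptr restores every iptables rule except those tagged SPDK_NVMF, so firewall state unrelated to the test survives; remove_spdk_ns at the end amounts to deleting the namespace and is shown here directly:

    iptables-save | grep -v SPDK_NVMF | iptables-restore
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk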
00:16:39.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:39.018 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:39.018 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:16:39.018 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:39.276 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:39.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.277 --rc genhtml_branch_coverage=1 00:16:39.277 --rc genhtml_function_coverage=1 00:16:39.277 --rc genhtml_legend=1 00:16:39.277 --rc geninfo_all_blocks=1 00:16:39.277 --rc geninfo_unexecuted_blocks=1 00:16:39.277 00:16:39.277 ' 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:39.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.277 --rc genhtml_branch_coverage=1 00:16:39.277 --rc genhtml_function_coverage=1 00:16:39.277 --rc genhtml_legend=1 00:16:39.277 --rc geninfo_all_blocks=1 00:16:39.277 --rc geninfo_unexecuted_blocks=1 00:16:39.277 00:16:39.277 ' 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:39.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.277 --rc genhtml_branch_coverage=1 00:16:39.277 --rc genhtml_function_coverage=1 00:16:39.277 --rc genhtml_legend=1 00:16:39.277 --rc geninfo_all_blocks=1 00:16:39.277 --rc geninfo_unexecuted_blocks=1 00:16:39.277 00:16:39.277 ' 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:39.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.277 --rc genhtml_branch_coverage=1 00:16:39.277 --rc genhtml_function_coverage=1 00:16:39.277 --rc genhtml_legend=1 00:16:39.277 --rc geninfo_all_blocks=1 00:16:39.277 --rc geninfo_unexecuted_blocks=1 00:16:39.277 00:16:39.277 ' 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:39.277 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:39.277 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:39.278 Cannot find device "nvmf_init_br" 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:39.278 Cannot find device "nvmf_init_br2" 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:39.278 Cannot find device "nvmf_tgt_br" 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:39.278 Cannot find device "nvmf_tgt_br2" 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:39.278 Cannot find device "nvmf_init_br" 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:39.278 Cannot find device "nvmf_init_br2" 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:39.278 Cannot find device "nvmf_tgt_br" 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:39.278 Cannot find device "nvmf_tgt_br2" 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:39.278 Cannot find device "nvmf_br" 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:39.278 Cannot find device "nvmf_init_if" 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:39.278 Cannot find device "nvmf_init_if2" 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:39.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:39.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:16:39.278 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:39.536 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:16:39.536 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:39.536 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:39.536 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:39.536 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:39.536 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:39.536 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:39.536 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:39.536 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:39.536 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:39.536 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:39.536 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.126 ms 00:16:39.536 00:16:39.536 --- 10.0.0.3 ping statistics --- 00:16:39.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.536 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:16:39.536 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:39.536 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:39.536 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:16:39.536 00:16:39.536 --- 10.0.0.4 ping statistics --- 00:16:39.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.536 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:39.536 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:39.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:39.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:16:39.537 00:16:39.537 --- 10.0.0.1 ping statistics --- 00:16:39.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.537 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:39.537 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:39.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:39.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:16:39.537 00:16:39.537 --- 10.0.0.2 ping statistics --- 00:16:39.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.537 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:39.537 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.537 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0 00:16:39.537 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:39.537 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.795 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:39.795 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:39.795 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.795 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:39.795 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:39.795 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:39.795 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:39.795 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:39.795 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:39.795 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=73781 00:16:39.795 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:39.795 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 73781 00:16:39.795 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 73781 ']' 00:16:39.795 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.795 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.795 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.795 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.795 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:39.795 [2024-12-06 13:10:46.152773] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:16:39.795 [2024-12-06 13:10:46.152875] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.795 [2024-12-06 13:10:46.310065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.053 [2024-12-06 13:10:46.352700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.053 [2024-12-06 13:10:46.352764] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.053 [2024-12-06 13:10:46.352778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.053 [2024-12-06 13:10:46.352788] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.053 [2024-12-06 13:10:46.352798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.053 [2024-12-06 13:10:46.353818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.053 [2024-12-06 13:10:46.353974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.053 [2024-12-06 13:10:46.354017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.053 [2024-12-06 13:10:46.354020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.988 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.988 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:40.988 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:40.988 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:40.988 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:40.988 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.988 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:40.988 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:40.988 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:40.988 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:40.988 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:40.988 "nvmf_tgt_1" 00:16:40.988 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:41.247 "nvmf_tgt_2" 00:16:41.247 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:41.247 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:16:41.247 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:41.247 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:41.505 true 00:16:41.505 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:41.505 true 00:16:41.505 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:41.505 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:41.764 rmmod nvme_tcp 00:16:41.764 rmmod nvme_fabrics 00:16:41.764 rmmod nvme_keyring 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 73781 ']' 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 73781 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 73781 ']' 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 73781 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73781 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:41.764 killing process with pid 73781 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
73781' 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 73781 00:16:41.764 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 73781 00:16:42.022 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:42.022 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:42.022 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:42.022 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:42.022 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:42.022 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:42.022 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:42.022 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:42.022 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:42.022 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:42.022 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:42.022 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:42.022 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:42.022 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:42.022 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:42.022 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:42.023 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:42.023 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:42.023 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:42.280 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:42.280 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:42.280 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:42.280 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:42.281 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.281 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:42.281 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.281 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:16:42.281 00:16:42.281 
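The teardown just traced (nvmftestfini) is the mirror image of setup: unload the kernel initiator modules, kill the target by pid, strip only the SPDK-tagged firewall rules, then dismantle the veth/bridge topology. Roughly, per the trace (the final netns removal is assumed, since _remove_spdk_ns runs with its output redirected):

    modprobe -v -r nvme-tcp            # also drops the nvme_fabrics/nvme_keyring deps
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid" # pid 73781 in this run
    # iptr: drop only rules carrying the SPDK_NVMF comment, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # unwind the veth/bridge topology built at init
    for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$p" nomaster && ip link set "$p" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed: what _remove_spdk_ns amounts to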
real 0m3.241s 00:16:42.281 user 0m9.681s 00:16:42.281 sys 0m0.742s 00:16:42.281 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:42.281 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:42.281 ************************************ 00:16:42.281 END TEST nvmf_multitarget 00:16:42.281 ************************************ 00:16:42.281 13:10:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:42.281 13:10:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:42.281 13:10:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:42.281 13:10:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:42.281 ************************************ 00:16:42.281 START TEST nvmf_rpc 00:16:42.281 ************************************ 00:16:42.281 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:42.281 * Looking for test storage... 00:16:42.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:42.281 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:42.281 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:42.281 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:42.540 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:42.540 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:42.540 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:42.540 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:42.540 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:42.540 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:42.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.541 --rc genhtml_branch_coverage=1 00:16:42.541 --rc genhtml_function_coverage=1 00:16:42.541 --rc genhtml_legend=1 00:16:42.541 --rc geninfo_all_blocks=1 00:16:42.541 --rc geninfo_unexecuted_blocks=1 00:16:42.541 00:16:42.541 ' 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:42.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.541 --rc genhtml_branch_coverage=1 00:16:42.541 --rc genhtml_function_coverage=1 00:16:42.541 --rc genhtml_legend=1 00:16:42.541 --rc geninfo_all_blocks=1 00:16:42.541 --rc geninfo_unexecuted_blocks=1 00:16:42.541 00:16:42.541 ' 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:42.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.541 --rc genhtml_branch_coverage=1 00:16:42.541 --rc genhtml_function_coverage=1 00:16:42.541 --rc genhtml_legend=1 00:16:42.541 --rc geninfo_all_blocks=1 00:16:42.541 --rc geninfo_unexecuted_blocks=1 00:16:42.541 00:16:42.541 ' 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:42.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.541 --rc genhtml_branch_coverage=1 00:16:42.541 --rc genhtml_function_coverage=1 00:16:42.541 --rc genhtml_legend=1 00:16:42.541 --rc geninfo_all_blocks=1 00:16:42.541 --rc geninfo_unexecuted_blocks=1 00:16:42.541 00:16:42.541 ' 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.541 13:10:48 
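The nvmf_rpc preamble starting here gates lcov options on the tool's version via the cmp_versions helper from scripts/common.sh, whose field-by-field comparison is partially visible in the trace. A minimal standalone rendering of that "lt" check, assuming numeric fields and treating missing fields as 0:

    # lt A B  ->  success (0) iff version A sorts strictly before version B
    lt() {
      local -a v1 v2
      local i
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v2[i]:-0} < ${v1[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo 'lcov predates 2.x'   # the exact comparison in the trace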
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:42.541 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:42.541 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
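One genuine, if harmless, harness wart is recorded just above: nvmf/common.sh line 33 feeds an empty string to test's -eq, producing "[: : integer expression expected". The usual guard is a default expansion; FLAG below is a stand-in for whatever unset variable reaches that test:

    # brittle when the variable is unset or empty:
    #   [ "$FLAG" -eq 1 ]        ->  [: : integer expression expected
    # robust, treating unset/empty as 0:
    if [ "${FLAG:-0}" -eq 1 ]; then
      echo 'flag enabled'
    fi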
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:42.542 Cannot find device "nvmf_init_br" 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:16:42.542 13:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:42.542 Cannot find device "nvmf_init_br2" 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:42.542 Cannot find device "nvmf_tgt_br" 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:42.542 Cannot find device "nvmf_tgt_br2" 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:42.542 Cannot find device "nvmf_init_br" 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:42.542 Cannot find device "nvmf_init_br2" 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:16:42.542 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:42.542 Cannot find device "nvmf_tgt_br" 00:16:42.542 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:16:42.542 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:42.542 Cannot find device "nvmf_tgt_br2" 00:16:42.542 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:16:42.542 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:42.542 Cannot find device "nvmf_br" 00:16:42.542 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:16:42.542 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:42.542 Cannot find device "nvmf_init_if" 00:16:42.542 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:16:42.542 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:42.542 Cannot find device "nvmf_init_if2" 00:16:42.542 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:16:42.542 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:42.542 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.542 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:16:42.542 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:42.542 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.542 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:16:42.542 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
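The run of "Cannot find device" messages above is expected: nvmftestinit first tries to remove any topology left over from a previous run, and each removal is allowed to fail, in the usual idempotent-cleanup pattern (the `true` entries in the trace are consistent with an `|| true` guard):

    # pre-clean: ignore failures, the devices may simply not exist yet
    ip link set nvmf_init_br nomaster || true
    ip link delete nvmf_br type bridge || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true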
nvmf_init_br2 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:42.801 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
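With the stale state cleared, nvmf_veth_init builds a self-contained test network: the two initiator veth endpoints stay in the root namespace, the two target endpoints move into nvmf_tgt_ns_spdk, and a bridge joins the four peer ends. The whole topology, collapsed from the trace:

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if end is the endpoint, the *_br end joins the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if     # target side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$p" up && ip link set "$p" master nvmf_br
    done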
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:42.802 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:42.802 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:16:42.802 00:16:42.802 --- 10.0.0.3 ping statistics --- 00:16:42.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.802 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:42.802 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:42.802 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:16:42.802 00:16:42.802 --- 10.0.0.4 ping statistics --- 00:16:42.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.802 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:42.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:42.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:42.802 00:16:42.802 --- 10.0.0.1 ping statistics --- 00:16:42.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.802 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:42.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:42.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:16:42.802 00:16:42.802 --- 10.0.0.2 ping statistics --- 00:16:42.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.802 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:42.802 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:43.061 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:43.061 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:43.061 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:43.061 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.061 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=74064 00:16:43.061 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:43.061 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 74064 00:16:43.061 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 74064 ']' 00:16:43.061 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.061 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.061 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.061 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.061 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.061 [2024-12-06 13:10:49.409350] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
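Before the target starts, the harness opens TCP port 4420 toward the initiator interfaces, tagging each rule with an SPDK_NVMF comment so the iptr cleanup can later delete exactly these rules, then ping-verifies all four addresses across the bridge in both directions:

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4            # root ns -> target ns
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2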
00:16:43.061 [2024-12-06 13:10:49.409460] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.061 [2024-12-06 13:10:49.570373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:43.320 [2024-12-06 13:10:49.627364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.320 [2024-12-06 13:10:49.627456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.320 [2024-12-06 13:10:49.627481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.320 [2024-12-06 13:10:49.627498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.320 [2024-12-06 13:10:49.627513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.320 [2024-12-06 13:10:49.628808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.320 [2024-12-06 13:10:49.628878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.320 [2024-12-06 13:10:49.629043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:43.320 [2024-12-06 13:10:49.629056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.320 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.320 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:43.320 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:43.320 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:43.320 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.320 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.320 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:43.320 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.320 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.320 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.320 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:43.320 "poll_groups": [ 00:16:43.320 { 00:16:43.320 "admin_qpairs": 0, 00:16:43.320 "completed_nvme_io": 0, 00:16:43.320 "current_admin_qpairs": 0, 00:16:43.320 "current_io_qpairs": 0, 00:16:43.320 "io_qpairs": 0, 00:16:43.320 "name": "nvmf_tgt_poll_group_000", 00:16:43.320 "pending_bdev_io": 0, 00:16:43.320 "transports": [] 00:16:43.320 }, 00:16:43.320 { 00:16:43.320 "admin_qpairs": 0, 00:16:43.320 "completed_nvme_io": 0, 00:16:43.320 "current_admin_qpairs": 0, 00:16:43.320 "current_io_qpairs": 0, 00:16:43.320 "io_qpairs": 0, 00:16:43.320 "name": "nvmf_tgt_poll_group_001", 00:16:43.320 "pending_bdev_io": 0, 00:16:43.320 "transports": [] 00:16:43.320 }, 00:16:43.320 { 00:16:43.320 "admin_qpairs": 0, 00:16:43.320 "completed_nvme_io": 0, 00:16:43.320 "current_admin_qpairs": 0, 00:16:43.320 "current_io_qpairs": 0, 
00:16:43.320 "io_qpairs": 0, 00:16:43.320 "name": "nvmf_tgt_poll_group_002", 00:16:43.320 "pending_bdev_io": 0, 00:16:43.320 "transports": [] 00:16:43.320 }, 00:16:43.320 { 00:16:43.320 "admin_qpairs": 0, 00:16:43.320 "completed_nvme_io": 0, 00:16:43.320 "current_admin_qpairs": 0, 00:16:43.320 "current_io_qpairs": 0, 00:16:43.320 "io_qpairs": 0, 00:16:43.320 "name": "nvmf_tgt_poll_group_003", 00:16:43.320 "pending_bdev_io": 0, 00:16:43.320 "transports": [] 00:16:43.320 } 00:16:43.320 ], 00:16:43.320 "tick_rate": 2200000000 00:16:43.320 }' 00:16:43.320 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:43.320 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:43.320 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:43.320 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.579 [2024-12-06 13:10:49.896146] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:43.579 "poll_groups": [ 00:16:43.579 { 00:16:43.579 "admin_qpairs": 0, 00:16:43.579 "completed_nvme_io": 0, 00:16:43.579 "current_admin_qpairs": 0, 00:16:43.579 "current_io_qpairs": 0, 00:16:43.579 "io_qpairs": 0, 00:16:43.579 "name": "nvmf_tgt_poll_group_000", 00:16:43.579 "pending_bdev_io": 0, 00:16:43.579 "transports": [ 00:16:43.579 { 00:16:43.579 "trtype": "TCP" 00:16:43.579 } 00:16:43.579 ] 00:16:43.579 }, 00:16:43.579 { 00:16:43.579 "admin_qpairs": 0, 00:16:43.579 "completed_nvme_io": 0, 00:16:43.579 "current_admin_qpairs": 0, 00:16:43.579 "current_io_qpairs": 0, 00:16:43.579 "io_qpairs": 0, 00:16:43.579 "name": "nvmf_tgt_poll_group_001", 00:16:43.579 "pending_bdev_io": 0, 00:16:43.579 "transports": [ 00:16:43.579 { 00:16:43.579 "trtype": "TCP" 00:16:43.579 } 00:16:43.579 ] 00:16:43.579 }, 00:16:43.579 { 00:16:43.579 "admin_qpairs": 0, 00:16:43.579 "completed_nvme_io": 0, 00:16:43.579 "current_admin_qpairs": 0, 00:16:43.579 "current_io_qpairs": 0, 00:16:43.579 "io_qpairs": 0, 00:16:43.579 "name": "nvmf_tgt_poll_group_002", 00:16:43.579 "pending_bdev_io": 0, 00:16:43.579 "transports": [ 00:16:43.579 { 00:16:43.579 "trtype": "TCP" 00:16:43.579 } 
00:16:43.579 ] 00:16:43.579 }, 00:16:43.579 { 00:16:43.579 "admin_qpairs": 0, 00:16:43.579 "completed_nvme_io": 0, 00:16:43.579 "current_admin_qpairs": 0, 00:16:43.579 "current_io_qpairs": 0, 00:16:43.579 "io_qpairs": 0, 00:16:43.579 "name": "nvmf_tgt_poll_group_003", 00:16:43.579 "pending_bdev_io": 0, 00:16:43.579 "transports": [ 00:16:43.579 { 00:16:43.579 "trtype": "TCP" 00:16:43.579 } 00:16:43.579 ] 00:16:43.579 } 00:16:43.579 ], 00:16:43.579 "tick_rate": 2200000000 00:16:43.579 }' 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:43.579 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.579 Malloc1 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:43.579 13:10:50 
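The jcount and jsum assertions above reduce the nvmf_get_stats JSON to scalars before comparing; their bodies are the one-line pipelines visible in the trace (rpc_cmd being the autotest wrapper that forwards to scripts/rpc.py):

    # jcount: how many results a jq filter yields (4 poll groups, one per core of -m 0xF)
    rpc_cmd nvmf_get_stats | jq '.poll_groups[].name' | wc -l
    # jsum: total a numeric field across poll groups (0 qpairs right after startup)
    rpc_cmd nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'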
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.579 [2024-12-06 13:10:50.093593] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -a 10.0.0.3 -s 4420 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -a 10.0.0.3 -s 4420 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.579 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:43.838 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.838 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:43.838 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:43.838 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -a 10.0.0.3 -s 4420 00:16:43.838 [2024-12-06 13:10:50.121930] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2' 00:16:43.838 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:43.838 could not add new controller: failed to write to nvme-fabrics device 00:16:43.838 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 
00:16:43.838 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:43.838 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:43.838 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:43.838 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:16:43.838 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.838 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.838 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.838 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:43.838 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:43.838 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:43.838 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:43.839 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:43.839 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:46.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
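What rpc.sh@52 through @65 exercise is the per-subsystem host ACL: with allow_any_host disabled, a connect from an unlisted host NQN must fail with "does not allow host"; after nvmf_subsystem_add_host the same connect must succeed. Collapsed from the trace, using the NVME_HOSTNQN/NVME_HOSTID identity generated earlier:

    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # -d: deny unlisted hosts
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # must fail: the host NQN is not on the subsystem's allowed list
    ! nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    # now on the list, the connect goes through
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420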
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:46.370 [2024-12-06 13:10:52.423125] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2' 00:16:46.370 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:46.370 could not add new controller: failed to write to nvme-fabrics device 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:46.370 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:48.266 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:48.266 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:48.266 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:48.266 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:48.266 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:48.266 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:48.266 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:48.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.266 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:48.266 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:48.266 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:48.266 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:48.266 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:48.266 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:48.266 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:48.266 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:48.266 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.266 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
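Between connect and disconnect the harness polls the block layer rather than sleeping blind: waitforserial loops until a namespace carrying the subsystem's serial appears in lsblk, and waitforserial_disconnect until it is gone again. The loop, condensed from the trace (up to 15 tries, 2 seconds apart):

    i=0
    while (( i++ <= 15 )); do
      n=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
      (( n >= 1 )) && break     # the expected namespace showed up
      sleep 2
    done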
SPDKISFASTANDAWESOME 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.524 [2024-12-06 13:10:54.812546] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:48.524 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:51.056 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:51.056 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:51.056 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:51.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.056 [2024-12-06 13:10:57.207784] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.056 13:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:51.056 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:52.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.982 13:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.982 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.982 [2024-12-06 13:10:59.507176] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:53.241 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.241 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:53.241 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.241 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.241 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.241 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:53.241 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.241 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.241 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.241 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:53.241 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:53.241 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:53.241 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:53.241 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:53.241 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1209 -- # sleep 2 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:55.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:55.772 13:11:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.772 [2024-12-06 13:11:01.798919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:55.772 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:57.674 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:57.674 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:57.674 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:57.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:57.674 13:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.674 [2024-12-06 13:11:04.106220] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:57.674 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:57.933 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:57.933 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:57.933 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:57.933 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:57.933 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:59.835 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:59.835 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:59.835 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:59.835 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:59.835 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:59.835 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:59.835 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:59.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.094 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:00.094 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:00.094 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:00.094 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.094 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.094 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:00.094 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:00.094 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:00.094 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.094 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.094 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.094 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.094 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.094 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.094 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
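For reference, each pass of the target/rpc.sh loop traced above reduces to the command sequence below. This is a hand-distilled sketch reconstructed from the xtrace, not part of the harness output; rpc_cmd is, to the best of my reading, the test harness wrapper around SPDK's scripts/rpc.py, and every argument (subsystem NQN, serial, host NQN/ID, address, namespace ID) is taken verbatim from the log.

  # One iteration of the create/connect/teardown cycle, distilled from the xtrace above.
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME   # subsystem with known serial
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420  # TCP listener
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5              # attach bdev Malloc1 as nsid 5
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1                   # disable the host allow-list
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 \
               --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
  # waitforserial: poll until lsblk reports a block device carrying the serial
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The second loop that follows (target/rpc.sh@99) exercises the same RPCs without a host connect, which is why no nvme connect/disconnect lines appear in its iterations.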
00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 [2024-12-06 13:11:06.417541] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 [2024-12-06 13:11:06.465623] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:00.095 13:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 [2024-12-06 13:11:06.513647] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 [2024-12-06 13:11:06.561686] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 
13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.095 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:00.096 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.096 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.096 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.096 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.096 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:00.096 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.096 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.096 [2024-12-06 13:11:06.609754] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:00.096 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.096 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:00.096 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.096 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.354 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:00.354 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.354 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.354 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.354 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.354 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.354 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.354 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.354 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.354 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.354 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.354 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.354 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.354 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:00.354 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.354 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.354 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.354 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:00.354 "poll_groups": [ 00:17:00.354 { 00:17:00.354 "admin_qpairs": 2, 00:17:00.355 "completed_nvme_io": 66, 00:17:00.355 "current_admin_qpairs": 0, 00:17:00.355 "current_io_qpairs": 0, 00:17:00.355 "io_qpairs": 16, 00:17:00.355 "name": "nvmf_tgt_poll_group_000", 00:17:00.355 "pending_bdev_io": 0, 00:17:00.355 "transports": [ 00:17:00.355 { 00:17:00.355 "trtype": "TCP" 00:17:00.355 } 00:17:00.355 ] 00:17:00.355 }, 00:17:00.355 { 00:17:00.355 "admin_qpairs": 3, 00:17:00.355 "completed_nvme_io": 67, 00:17:00.355 "current_admin_qpairs": 0, 00:17:00.355 "current_io_qpairs": 0, 00:17:00.355 "io_qpairs": 17, 00:17:00.355 "name": "nvmf_tgt_poll_group_001", 00:17:00.355 "pending_bdev_io": 0, 00:17:00.355 "transports": [ 00:17:00.355 { 00:17:00.355 "trtype": "TCP" 00:17:00.355 } 00:17:00.355 ] 00:17:00.355 }, 00:17:00.355 { 00:17:00.355 "admin_qpairs": 1, 00:17:00.355 "completed_nvme_io": 119, 00:17:00.355 "current_admin_qpairs": 0, 00:17:00.355 "current_io_qpairs": 0, 00:17:00.355 "io_qpairs": 19, 00:17:00.355 "name": "nvmf_tgt_poll_group_002", 00:17:00.355 "pending_bdev_io": 0, 00:17:00.355 "transports": [ 00:17:00.355 { 00:17:00.355 "trtype": "TCP" 00:17:00.355 } 00:17:00.355 ] 00:17:00.355 }, 00:17:00.355 { 00:17:00.355 "admin_qpairs": 1, 00:17:00.355 "completed_nvme_io": 168, 00:17:00.355 "current_admin_qpairs": 0, 00:17:00.355 "current_io_qpairs": 0, 00:17:00.355 "io_qpairs": 18, 00:17:00.355 "name": "nvmf_tgt_poll_group_003", 00:17:00.355 "pending_bdev_io": 0, 00:17:00.355 "transports": [ 00:17:00.355 { 00:17:00.355 "trtype": "TCP" 00:17:00.355 } 00:17:00.355 ] 00:17:00.355 } 00:17:00.355 ], 
00:17:00.355 "tick_rate": 2200000000 00:17:00.355 }' 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:00.355 rmmod nvme_tcp 00:17:00.355 rmmod nvme_fabrics 00:17:00.355 rmmod nvme_keyring 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 74064 ']' 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 74064 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 74064 ']' 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 74064 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74064 00:17:00.355 killing process with pid 74064 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.355 13:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74064' 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 74064 00:17:00.355 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 74064 00:17:00.614 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:00.614 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:00.614 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:00.614 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:00.614 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:00.614 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:00.614 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:00.614 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:00.614 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:00.614 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:00.614 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:00.614 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:00.614 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:00.614 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:00.614 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:00.614 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:00.614 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:00.614 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 00:17:00.873 00:17:00.873 real 0m18.545s 00:17:00.873 user 1m8.402s 00:17:00.873 sys 0m2.557s 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.873 ************************************ 00:17:00.873 END TEST nvmf_rpc 00:17:00.873 ************************************ 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:00.873 ************************************ 00:17:00.873 START TEST nvmf_invalid 00:17:00.873 ************************************ 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:00.873 * Looking for test storage... 00:17:00.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:00.873 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:01.131 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:01.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.132 --rc genhtml_branch_coverage=1 00:17:01.132 --rc genhtml_function_coverage=1 00:17:01.132 --rc genhtml_legend=1 00:17:01.132 --rc geninfo_all_blocks=1 00:17:01.132 --rc geninfo_unexecuted_blocks=1 00:17:01.132 00:17:01.132 ' 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:01.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.132 --rc genhtml_branch_coverage=1 00:17:01.132 --rc genhtml_function_coverage=1 00:17:01.132 --rc genhtml_legend=1 00:17:01.132 --rc geninfo_all_blocks=1 00:17:01.132 --rc geninfo_unexecuted_blocks=1 00:17:01.132 00:17:01.132 ' 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:01.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.132 --rc genhtml_branch_coverage=1 00:17:01.132 --rc genhtml_function_coverage=1 00:17:01.132 --rc genhtml_legend=1 00:17:01.132 --rc geninfo_all_blocks=1 00:17:01.132 --rc geninfo_unexecuted_blocks=1 00:17:01.132 00:17:01.132 ' 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:01.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.132 --rc genhtml_branch_coverage=1 00:17:01.132 --rc genhtml_function_coverage=1 00:17:01.132 --rc genhtml_legend=1 00:17:01.132 --rc geninfo_all_blocks=1 00:17:01.132 --rc geninfo_unexecuted_blocks=1 00:17:01.132 00:17:01.132 ' 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:01.132 13:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:01.132 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:01.132 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
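For orientation: the nvmf_veth_init steps traced below build a bridged veth topology, with the initiator endpoints left in the root namespace and the target endpoints moved into nvmf_tgt_ns_spdk. What follows is a minimal standalone sketch of that topology — not the harness code itself — using the interface names and 10.0.0.x addresses defined above, with only one initiator/target pair shown where the harness creates two of each (run as root; the harness additionally opens tcp/4420 with comment-tagged iptables rules later in the trace):

set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: *_if is the usable endpoint, *_br is the peer that joins the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# initiator address in the root namespace, target address inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the two peer ends so initiator and target can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

ping -c 1 10.0.0.3   # root namespace -> namespaced target, as verified in the trace
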
00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:01.133 Cannot find device "nvmf_init_br" 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:01.133 Cannot find device "nvmf_init_br2" 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:01.133 Cannot find device "nvmf_tgt_br" 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:01.133 Cannot find device "nvmf_tgt_br2" 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:01.133 Cannot find device "nvmf_init_br" 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:01.133 Cannot find device "nvmf_init_br2" 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:01.133 Cannot find device "nvmf_tgt_br" 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:01.133 Cannot find device "nvmf_tgt_br2" 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:01.133 Cannot find device "nvmf_br" 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:01.133 Cannot find device "nvmf_init_if" 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:01.133 Cannot find device "nvmf_init_if2" 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:01.133 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:01.133 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:01.133 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:01.392 13:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:01.392 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:01.392 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:17:01.392 00:17:01.392 --- 10.0.0.3 ping statistics --- 00:17:01.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.392 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:01.392 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:01.392 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:17:01.392 00:17:01.392 --- 10.0.0.4 ping statistics --- 00:17:01.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.392 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:01.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:01.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:01.392 00:17:01.392 --- 10.0.0.1 ping statistics --- 00:17:01.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.392 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:01.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:01.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:17:01.392 00:17:01.392 --- 10.0.0.2 ping statistics --- 00:17:01.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.392 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=74615 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 74615 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 74615 ']' 00:17:01.392 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.393 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:01.393 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.393 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:01.393 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:01.651 [2024-12-06 13:11:07.968301] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
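At this point nvmf_tgt is launched inside the namespace and the harness blocks in waitforlisten until the JSON-RPC socket answers. A rough sketch of that launch-and-poll pattern, with the binary path and flags taken from the trace above; the polling loop is a simplification (the real waitforlisten has more bookkeeping), and it assumes rpc.py's -s socket option and the rpc_get_methods RPC, both standard SPDK tooling:

# start the target under the namespace, keeping its pid for later cleanup
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# poll the UNIX-domain RPC socket until the app answers (or give up)
for _ in $(seq 1 100); do
    # rpc_get_methods is a cheap call; success means the app is listening
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods > /dev/null 2>&1; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"
        break
    fi
    sleep 0.1
done
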
00:17:01.651 [2024-12-06 13:11:07.968411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.651 [2024-12-06 13:11:08.120561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:01.651 [2024-12-06 13:11:08.153780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.651 [2024-12-06 13:11:08.153837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.651 [2024-12-06 13:11:08.153849] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.651 [2024-12-06 13:11:08.153858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.651 [2024-12-06 13:11:08.153867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:01.651 [2024-12-06 13:11:08.154669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.651 [2024-12-06 13:11:08.154719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.651 [2024-12-06 13:11:08.154800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:01.651 [2024-12-06 13:11:08.154804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.908 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.908 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:01.909 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:01.909 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:01.909 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:01.909 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.909 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:01.909 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16181 00:17:02.167 [2024-12-06 13:11:08.582444] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:02.167 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/12/06 13:11:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode16181 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:17:02.167 request: 00:17:02.167 { 00:17:02.167 "method": "nvmf_create_subsystem", 00:17:02.167 "params": { 00:17:02.167 "nqn": "nqn.2016-06.io.spdk:cnode16181", 00:17:02.167 "tgt_name": "foobar" 00:17:02.167 } 00:17:02.167 } 00:17:02.167 Got JSON-RPC error response 00:17:02.167 GoRPCClient: error on JSON-RPC call' 00:17:02.167 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/12/06 13:11:08 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode16181 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:17:02.167 request: 00:17:02.167 { 00:17:02.167 "method": "nvmf_create_subsystem", 00:17:02.167 "params": { 00:17:02.167 "nqn": "nqn.2016-06.io.spdk:cnode16181", 00:17:02.167 "tgt_name": "foobar" 00:17:02.167 } 00:17:02.167 } 00:17:02.167 Got JSON-RPC error response 00:17:02.167 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:02.167 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:02.167 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode28113 00:17:02.425 [2024-12-06 13:11:08.886745] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28113: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:02.425 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/12/06 13:11:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode28113 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:17:02.425 request: 00:17:02.425 { 00:17:02.425 "method": "nvmf_create_subsystem", 00:17:02.425 "params": { 00:17:02.425 "nqn": "nqn.2016-06.io.spdk:cnode28113", 00:17:02.425 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:17:02.425 } 00:17:02.425 } 00:17:02.425 Got JSON-RPC error response 00:17:02.425 GoRPCClient: error on JSON-RPC call' 00:17:02.425 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/12/06 13:11:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode28113 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:17:02.425 request: 00:17:02.425 { 00:17:02.425 "method": "nvmf_create_subsystem", 00:17:02.425 "params": { 00:17:02.425 "nqn": "nqn.2016-06.io.spdk:cnode28113", 00:17:02.425 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:17:02.425 } 00:17:02.425 } 00:17:02.425 Got JSON-RPC error response 00:17:02.425 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:02.425 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:02.425 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode463 00:17:02.992 [2024-12-06 13:11:09.223031] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode463: invalid model number 'SPDK_Controller' 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/12/06 13:11:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode463], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:17:02.992 request: 00:17:02.992 { 00:17:02.992 "method": "nvmf_create_subsystem", 00:17:02.992 "params": { 00:17:02.992 "nqn": "nqn.2016-06.io.spdk:cnode463", 00:17:02.992 "model_number": "SPDK_Controller\u001f" 00:17:02.992 } 
00:17:02.992 } 00:17:02.992 Got JSON-RPC error response 00:17:02.992 GoRPCClient: error on JSON-RPC call' 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/12/06 13:11:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode463], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:17:02.992 request: 00:17:02.992 { 00:17:02.992 "method": "nvmf_create_subsystem", 00:17:02.992 "params": { 00:17:02.992 "nqn": "nqn.2016-06.io.spdk:cnode463", 00:17:02.992 "model_number": "SPDK_Controller\u001f" 00:17:02.992 } 00:17:02.992 } 00:17:02.992 Got JSON-RPC error response 00:17:02.992 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.992 13:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.992 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 
00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ N == \- ]] 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'N"A>v=P?^@UY?J.i4QY@Q' 00:17:02.993 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'N"A>v=P?^@UY?J.i4QY@Q' nqn.2016-06.io.spdk:cnode16661 00:17:03.251 [2024-12-06 13:11:09.647390] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16661: invalid serial number 'N"A>v=P?^@UY?J.i4QY@Q' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/12/06 13:11:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode16661 serial_number:N"A>v=P?^@UY?J.i4QY@Q], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN N"A>v=P?^@UY?J.i4QY@Q 00:17:03.251 request: 00:17:03.251 { 00:17:03.251 "method": "nvmf_create_subsystem", 00:17:03.251 "params": { 00:17:03.251 "nqn": "nqn.2016-06.io.spdk:cnode16661", 00:17:03.251 "serial_number": "N\"A>v=P?^@UY?J.i4QY@Q" 00:17:03.251 } 00:17:03.251 } 00:17:03.251 Got JSON-RPC error response 00:17:03.251 GoRPCClient: error on JSON-RPC call' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/12/06 13:11:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode16661 serial_number:N"A>v=P?^@UY?J.i4QY@Q], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN N"A>v=P?^@UY?J.i4QY@Q 00:17:03.251 request: 00:17:03.251 { 00:17:03.251 "method": "nvmf_create_subsystem", 00:17:03.251 "params": { 00:17:03.251 "nqn": "nqn.2016-06.io.spdk:cnode16661", 00:17:03.251 "serial_number": "N\"A>v=P?^@UY?J.i4QY@Q" 00:17:03.251 } 00:17:03.251 } 00:17:03.251 Got JSON-RPC error response 00:17:03.251 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' 
'101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:03.251 13:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:03.251 
13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 
00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.251 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:17:03.510 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ C == \- ]] 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'CS5J;R 3kA\8F?O|~;l@F03~c:xmH(:`m'\''YdI*."L' 00:17:03.511 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'CS5J;R 3kA\8F?O|~;l@F03~c:xmH(:`m'\''YdI*."L' nqn.2016-06.io.spdk:cnode5391 00:17:03.768 [2024-12-06 13:11:10.215887] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5391: invalid model number 'CS5J;R 3kA\8F?O|~;l@F03~c:xmH(:`m'YdI*."L' 00:17:03.768 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/12/06 13:11:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:CS5J;R 3kA\8F?O|~;l@F03~c:xmH(:`m'\''YdI*."L nqn:nqn.2016-06.io.spdk:cnode5391], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN CS5J;R 3kA\8F?O|~;l@F03~c:xmH(:`m'\''YdI*."L 00:17:03.768 request: 00:17:03.768 { 00:17:03.768 "method": "nvmf_create_subsystem", 00:17:03.768 "params": { 00:17:03.768 "nqn": "nqn.2016-06.io.spdk:cnode5391", 00:17:03.768 "model_number": "CS5J;R 3kA\\8F?O|~;l@F03~c:xmH(:`m'\''YdI*.\"L" 00:17:03.768 } 00:17:03.768 } 00:17:03.768 Got JSON-RPC error response 00:17:03.768 GoRPCClient: error on JSON-RPC call' 00:17:03.768 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/12/06 13:11:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:CS5J;R 3kA\8F?O|~;l@F03~c:xmH(:`m'YdI*."L nqn:nqn.2016-06.io.spdk:cnode5391], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN CS5J;R 3kA\8F?O|~;l@F03~c:xmH(:`m'YdI*."L 00:17:03.768 request: 00:17:03.768 { 00:17:03.768 "method": "nvmf_create_subsystem", 00:17:03.768 "params": { 00:17:03.768 "nqn": "nqn.2016-06.io.spdk:cnode5391", 00:17:03.768 "model_number": "CS5J;R 3kA\\8F?O|~;l@F03~c:xmH(:`m'YdI*.\"L" 00:17:03.768 } 00:17:03.768 } 00:17:03.768 Got JSON-RPC error response 00:17:03.768 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:03.769 13:11:10 
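Note: the long trace above is target/invalid.sh assembling a 41-character random model number one byte at a time (printf %x to get the hex code, echo -e to turn it back into a character) and then asserting that nvmf_create_subsystem rejects it with an "Invalid MN" error. A condensed sketch of the same idiom follows; the length and character range here are illustrative rather than copied from the script:

  # Build a junk string from random printable ASCII, as the loop above does.
  string=''
  length=41
  for (( ll = 0; ll < length; ll++ )); do
      code=$(( (RANDOM % 95) + 32 ))                 # printable ASCII, 0x20 (space) .. 0x7e (~)
      string+=$(echo -e "\\x$(printf '%x' "$code")")
  done
  # Expect the target to reject the junk string as a model number:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem \
      -d "$string" nqn.2016-06.io.spdk:cnode5391 \
      || echo "rejected as expected: Invalid MN"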
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:04.026 [2024-12-06 13:11:10.524187] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.283 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:04.540 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:04.540 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:04.540 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:04.540 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:04.540 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:04.797 [2024-12-06 13:11:11.164829] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:04.797 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/12/06 13:11:11 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:17:04.797 request: 00:17:04.797 { 00:17:04.797 "method": "nvmf_subsystem_remove_listener", 00:17:04.797 "params": { 00:17:04.797 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:04.797 "listen_address": { 00:17:04.797 "trtype": "tcp", 00:17:04.797 "traddr": "", 00:17:04.797 "trsvcid": "4421" 00:17:04.797 } 00:17:04.797 } 00:17:04.797 } 00:17:04.797 Got JSON-RPC error response 00:17:04.797 GoRPCClient: error on JSON-RPC call' 00:17:04.797 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/12/06 13:11:11 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:17:04.797 request: 00:17:04.797 { 00:17:04.797 "method": "nvmf_subsystem_remove_listener", 00:17:04.797 "params": { 00:17:04.797 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:04.797 "listen_address": { 00:17:04.797 "trtype": "tcp", 00:17:04.797 "traddr": "", 00:17:04.797 "trsvcid": "4421" 00:17:04.797 } 00:17:04.797 } 00:17:04.797 } 00:17:04.797 Got JSON-RPC error response 00:17:04.797 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:04.797 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16945 -i 0 00:17:05.055 [2024-12-06 13:11:11.497225] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16945: invalid cntlid range [0-65519] 00:17:05.055 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/12/06 13:11:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode16945], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range 
[0-65519] 00:17:05.055 request: 00:17:05.055 { 00:17:05.055 "method": "nvmf_create_subsystem", 00:17:05.055 "params": { 00:17:05.055 "nqn": "nqn.2016-06.io.spdk:cnode16945", 00:17:05.055 "min_cntlid": 0 00:17:05.055 } 00:17:05.055 } 00:17:05.055 Got JSON-RPC error response 00:17:05.055 GoRPCClient: error on JSON-RPC call' 00:17:05.055 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/12/06 13:11:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode16945], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:17:05.055 request: 00:17:05.055 { 00:17:05.055 "method": "nvmf_create_subsystem", 00:17:05.055 "params": { 00:17:05.055 "nqn": "nqn.2016-06.io.spdk:cnode16945", 00:17:05.055 "min_cntlid": 0 00:17:05.055 } 00:17:05.055 } 00:17:05.055 Got JSON-RPC error response 00:17:05.055 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:05.055 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3850 -i 65520 00:17:05.313 [2024-12-06 13:11:11.769471] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3850: invalid cntlid range [65520-65519] 00:17:05.313 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/12/06 13:11:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode3850], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:17:05.313 request: 00:17:05.313 { 00:17:05.313 "method": "nvmf_create_subsystem", 00:17:05.313 "params": { 00:17:05.313 "nqn": "nqn.2016-06.io.spdk:cnode3850", 00:17:05.313 "min_cntlid": 65520 00:17:05.313 } 00:17:05.313 } 00:17:05.313 Got JSON-RPC error response 00:17:05.313 GoRPCClient: error on JSON-RPC call' 00:17:05.313 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/12/06 13:11:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode3850], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:17:05.313 request: 00:17:05.313 { 00:17:05.313 "method": "nvmf_create_subsystem", 00:17:05.313 "params": { 00:17:05.313 "nqn": "nqn.2016-06.io.spdk:cnode3850", 00:17:05.313 "min_cntlid": 65520 00:17:05.313 } 00:17:05.313 } 00:17:05.313 Got JSON-RPC error response 00:17:05.313 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:05.313 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9989 -I 0 00:17:05.571 [2024-12-06 13:11:12.053727] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9989: invalid cntlid range [1-0] 00:17:05.571 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/12/06 13:11:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode9989], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:17:05.571 request: 00:17:05.571 { 00:17:05.571 "method": "nvmf_create_subsystem", 00:17:05.571 
"params": { 00:17:05.571 "nqn": "nqn.2016-06.io.spdk:cnode9989", 00:17:05.571 "max_cntlid": 0 00:17:05.571 } 00:17:05.571 } 00:17:05.571 Got JSON-RPC error response 00:17:05.571 GoRPCClient: error on JSON-RPC call' 00:17:05.571 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/12/06 13:11:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode9989], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:17:05.571 request: 00:17:05.571 { 00:17:05.571 "method": "nvmf_create_subsystem", 00:17:05.571 "params": { 00:17:05.571 "nqn": "nqn.2016-06.io.spdk:cnode9989", 00:17:05.571 "max_cntlid": 0 00:17:05.571 } 00:17:05.571 } 00:17:05.571 Got JSON-RPC error response 00:17:05.571 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:05.571 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18651 -I 65520 00:17:06.138 [2024-12-06 13:11:12.400413] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18651: invalid cntlid range [1-65520] 00:17:06.138 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/12/06 13:11:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode18651], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:17:06.138 request: 00:17:06.138 { 00:17:06.138 "method": "nvmf_create_subsystem", 00:17:06.138 "params": { 00:17:06.138 "nqn": "nqn.2016-06.io.spdk:cnode18651", 00:17:06.138 "max_cntlid": 65520 00:17:06.138 } 00:17:06.138 } 00:17:06.138 Got JSON-RPC error response 00:17:06.138 GoRPCClient: error on JSON-RPC call' 00:17:06.138 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/12/06 13:11:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode18651], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:17:06.138 request: 00:17:06.138 { 00:17:06.138 "method": "nvmf_create_subsystem", 00:17:06.138 "params": { 00:17:06.138 "nqn": "nqn.2016-06.io.spdk:cnode18651", 00:17:06.138 "max_cntlid": 65520 00:17:06.138 } 00:17:06.138 } 00:17:06.138 Got JSON-RPC error response 00:17:06.138 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:06.138 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21738 -i 6 -I 5 00:17:06.396 [2024-12-06 13:11:12.768770] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21738: invalid cntlid range [6-5] 00:17:06.396 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/12/06 13:11:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode21738], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:17:06.396 request: 00:17:06.396 { 00:17:06.396 "method": "nvmf_create_subsystem", 00:17:06.396 "params": { 00:17:06.396 "nqn": "nqn.2016-06.io.spdk:cnode21738", 00:17:06.396 "min_cntlid": 6, 
00:17:06.396 "max_cntlid": 5 00:17:06.396 } 00:17:06.396 } 00:17:06.396 Got JSON-RPC error response 00:17:06.396 GoRPCClient: error on JSON-RPC call' 00:17:06.396 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/12/06 13:11:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode21738], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:17:06.396 request: 00:17:06.396 { 00:17:06.396 "method": "nvmf_create_subsystem", 00:17:06.396 "params": { 00:17:06.396 "nqn": "nqn.2016-06.io.spdk:cnode21738", 00:17:06.396 "min_cntlid": 6, 00:17:06.396 "max_cntlid": 5 00:17:06.396 } 00:17:06.396 } 00:17:06.396 Got JSON-RPC error response 00:17:06.396 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:06.396 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:06.671 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:06.671 { 00:17:06.671 "name": "foobar", 00:17:06.671 "method": "nvmf_delete_target", 00:17:06.671 "req_id": 1 00:17:06.671 } 00:17:06.671 Got JSON-RPC error response 00:17:06.671 response: 00:17:06.671 { 00:17:06.671 "code": -32602, 00:17:06.671 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:06.671 }' 00:17:06.672 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:06.672 { 00:17:06.672 "name": "foobar", 00:17:06.672 "method": "nvmf_delete_target", 00:17:06.672 "req_id": 1 00:17:06.672 } 00:17:06.672 Got JSON-RPC error response 00:17:06.672 response: 00:17:06.672 { 00:17:06.672 "code": -32602, 00:17:06.672 "message": "The specified target doesn't exist, cannot delete it." 
00:17:06.672 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:06.672 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:06.672 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:06.672 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:06.672 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:06.672 rmmod nvme_tcp 00:17:06.672 rmmod nvme_fabrics 00:17:06.672 rmmod nvme_keyring 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 74615 ']' 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 74615 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 74615 ']' 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 74615 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74615 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:06.672 killing process with pid 74615 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74615' 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 74615 00:17:06.672 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 74615 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
iptables-restore 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.955 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 00:17:07.215 00:17:07.215 real 0m6.188s 00:17:07.215 user 0m24.311s 00:17:07.215 sys 0m1.345s 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:07.215 ************************************ 00:17:07.215 END TEST nvmf_invalid 00:17:07.215 ************************************ 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:07.215 ************************************ 00:17:07.215 START TEST nvmf_connect_stress 00:17:07.215 
************************************ 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:07.215 * Looking for test storage... 00:17:07.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:07.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.215 --rc genhtml_branch_coverage=1 00:17:07.215 --rc genhtml_function_coverage=1 00:17:07.215 --rc genhtml_legend=1 00:17:07.215 --rc geninfo_all_blocks=1 00:17:07.215 --rc geninfo_unexecuted_blocks=1 00:17:07.215 00:17:07.215 ' 00:17:07.215 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:07.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.216 --rc genhtml_branch_coverage=1 00:17:07.216 --rc genhtml_function_coverage=1 00:17:07.216 --rc genhtml_legend=1 00:17:07.216 --rc geninfo_all_blocks=1 00:17:07.216 --rc geninfo_unexecuted_blocks=1 00:17:07.216 00:17:07.216 ' 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:07.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.216 --rc genhtml_branch_coverage=1 00:17:07.216 --rc genhtml_function_coverage=1 00:17:07.216 --rc genhtml_legend=1 00:17:07.216 --rc geninfo_all_blocks=1 00:17:07.216 --rc geninfo_unexecuted_blocks=1 00:17:07.216 00:17:07.216 ' 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:07.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.216 --rc genhtml_branch_coverage=1 00:17:07.216 --rc genhtml_function_coverage=1 00:17:07.216 --rc genhtml_legend=1 00:17:07.216 --rc geninfo_all_blocks=1 00:17:07.216 --rc geninfo_unexecuted_blocks=1 00:17:07.216 00:17:07.216 ' 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
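Note: the cmp_versions trace above is autotest_common.sh probing the installed lcov and comparing its version against 1.15 before choosing coverage flags. A minimal sketch of the "less than" comparison it performs, simplified from the real helper in scripts/common.sh (which also handles >, =, and more separators):

  # Compare dotted versions field by field; missing fields count as 0.
  version_lt() {
      local -a ver1 ver2
      IFS=.- read -ra ver1 <<< "$1"
      IFS=.- read -ra ver2 <<< "$2"
      local i
      for (( i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++ )); do
          (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
          (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
      done
      return 1    # versions are equal
  }
  version_lt 1.15 2 && echo "lcov older than 2: use the legacy lcov_* rc options"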
00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:07.216 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:07.216 13:11:13 
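Note: the "integer expression expected" line above is a known-benign warning, not a failure. As the surrounding trace shows, nvmf/common.sh line 33 runs '[' '' -eq 1 ']' because an optional variable is empty, and test(1) cannot apply -eq to an empty string, so the check evaluates false and build_nvmf_app_args continues. A defensive form of such a check, with an illustrative variable name rather than the exact one in common.sh:

  # Default the possibly-unset value to 0 so test(1) never sees an empty operand.
  if [ "${SOME_OPTIONAL_FLAG:-0}" -eq 1 ]; then
      echo "would append the optional app argument"
  fi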
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:07.216 13:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:07.216 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:07.475 Cannot find device "nvmf_init_br" 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:07.475 Cannot find device "nvmf_init_br2" 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:07.475 Cannot find device "nvmf_tgt_br" 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:07.475 Cannot find device "nvmf_tgt_br2" 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:07.475 Cannot find device "nvmf_init_br" 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:07.475 Cannot find device "nvmf_init_br2" 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:07.475 Cannot find device "nvmf_tgt_br" 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:07.475 Cannot find device "nvmf_tgt_br2" 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:07.475 Cannot find device "nvmf_br" 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:07.475 Cannot find device "nvmf_init_if" 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:07.475 Cannot find device "nvmf_init_if2" 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:07.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.475 13:11:13 
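Note: all of the "Cannot find device" lines above are expected. nvmf_veth_init starts by tearing down whatever topology a previous run left behind, and each cleanup command is followed by a bare true (the "# true" records in the trace) so a missing device cannot abort the script. The idiom, in sketch form:

  # Best-effort cleanup: every command may fail if the device never existed.
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster || true
      ip link set "$dev" down || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true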
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:07.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:07.475 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:07.476 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:07.476 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:07.476 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:07.476 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:07.476 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:07.476 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:07.476 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:07.734 13:11:14 
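Note: taken together, the records around this point build the suite's virtual network: a namespace (nvmf_tgt_ns_spdk) holding the SPDK target's ends of two veth pairs, two more pair ends on the initiator side, and a bridge joining the four peer ends (the bridge and enslaving records follow just below). A condensed sketch showing one pair per side; the real init also creates the nvmf_init_if2/nvmf_tgt_if2 twins with 10.0.0.2 and 10.0.0.4:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up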
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:07.734 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:07.734 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:17:07.734 00:17:07.734 --- 10.0.0.3 ping statistics --- 00:17:07.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.734 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:07.734 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:07.734 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:17:07.734 00:17:07.734 --- 10.0.0.4 ping statistics --- 00:17:07.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.734 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:07.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:07.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:17:07.734 00:17:07.734 --- 10.0.0.1 ping statistics --- 00:17:07.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.734 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:07.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:07.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:17:07.734 00:17:07.734 --- 10.0.0.2 ping statistics --- 00:17:07.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.734 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.734 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:07.735 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=75163 00:17:07.735 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:07.735 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 75163 00:17:07.735 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 75163 ']' 00:17:07.735 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.735 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:07.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.735 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.735 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.735 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:07.735 [2024-12-06 13:11:14.201684] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
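[Note] Connectivity verified in all four directions, the target app is launched inside the namespace (pid 75163 here) and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock responds. The -m 0xE core mask puts reactors on cores 1-3, matching the three "Reactor started" notices below. A sketch of the start-and-wait shape; the polling loop body is this sketch's assumption, though rpc.py and its rpc_get_methods method are standard SPDK tooling:

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the target answers (sketch).
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
      sleep 0.5
  done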
00:17:07.735 [2024-12-06 13:11:14.201759] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.993 [2024-12-06 13:11:14.354657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:07.993 [2024-12-06 13:11:14.413301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.993 [2024-12-06 13:11:14.413820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.993 [2024-12-06 13:11:14.414385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.993 [2024-12-06 13:11:14.414804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.993 [2024-12-06 13:11:14.415180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.993 [2024-12-06 13:11:14.416649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.993 [2024-12-06 13:11:14.416757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:07.993 [2024-12-06 13:11:14.417293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.252 [2024-12-06 13:11:14.575381] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:08.252 13:11:14 
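[Note] connect_stress.sh then configures the target entirely over JSON-RPC. The same sequence run by hand (including the null bdev created just below) would look roughly like this; commands and arguments are verbatim from the trace, while invoking them through scripts/rpc.py directly, and the flag glosses in the comments, are this sketch's assumptions:

  rpc=./scripts/rpc.py   # defaults to /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192   # TCP transport tuning (c2h-success off, io unit size, per rpc.py flag names)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc bdev_null_create NULL1 1000 512           # 1000 MiB null bdev, 512-byte blocks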
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.252 [2024-12-06 13:11:14.595668] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.252 NULL1 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=75207 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.252 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.511 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:17:08.511 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:08.511 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.511 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.511 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.076 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.076 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:09.076 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.076 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.076 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.336 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.336 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:09.336 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.336 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.336 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.594 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.594 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:09.594 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.594 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.594 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.852 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.852 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:09.852 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.852 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.852 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.109 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.109 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:10.109 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.109 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.109 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.674 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.674 
13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:10.674 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.675 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.675 13:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.933 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.933 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:10.933 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.933 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.933 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.190 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.190 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:11.190 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.190 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.190 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.449 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.449 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:11.449 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.449 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.449 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.709 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.709 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:11.709 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.709 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.709 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.277 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.277 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:12.277 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.277 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.277 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.535 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.535 13:11:18 
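[Note] The twenty-fold seq/cat block earlier and the kill -0 75207 / rpc_cmd pairs repeating through this stretch are two halves of one stress loop: build a batch of RPCs in rpc.txt, then keep replaying it for as long as the connect_stress binary (started with -t 10, a ten-second run) stays alive. A sketch of that shape as inferred from the trace; the exact RPC lines each `cat` appends are not visible in this log, so a labeled placeholder (and echo in place of the script's cat-heredoc) stands in for them:

  rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
  for i in $(seq 1 20); do
      # payload elided in this log: each iteration appends one RPC line
      echo "placeholder_rpc_$i" >> "$rpcs"
  done
  while kill -0 "$PERF_PID" 2>/dev/null; do   # PERF_PID=75207 above
      rpc_cmd < "$rpcs"                       # replay the batch while connect_stress runs
  done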
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:12.535 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.535 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.535 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.792 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.792 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:12.792 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.793 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.793 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.051 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.051 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:13.051 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.051 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.051 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.618 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.618 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:13.618 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.618 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.618 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.877 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.877 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:13.877 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.877 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.877 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.137 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.137 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:14.137 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.137 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.137 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.402 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.402 13:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:14.402 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.402 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.403 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.661 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.661 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:14.661 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.661 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.661 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.229 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.229 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:15.229 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.230 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.230 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.489 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.489 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:15.489 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.489 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.489 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.747 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.747 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:15.747 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.747 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.747 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.006 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.006 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:16.006 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.006 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.006 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.264 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.264 13:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:16.264 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.264 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.264 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.832 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.832 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:16.832 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.832 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.832 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.090 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.090 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:17.090 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.090 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.090 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.347 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.347 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:17.347 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.347 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.348 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.605 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.605 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:17.605 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.605 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.605 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.863 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.863 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:17.863 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.863 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.863 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.429 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.430 13:11:24 
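[Note] On the probe itself: kill -0 delivers no signal at all; its exit status alone reports whether the PID still exists and can be signalled, which makes it a cheap liveness check for the stress process:

  # kill -0 as a liveness probe (no signal is actually sent):
  if kill -0 75207 2>/dev/null; then
      echo "connect_stress still running"
  else
      echo "connect_stress finished"   # the loop exits on this, just below
  fi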
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:18.430 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.430 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.430 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.430 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75207 00:17:18.687 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (75207) - No such process 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 75207 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:18.687 rmmod nvme_tcp 00:17:18.687 rmmod nvme_fabrics 00:17:18.687 rmmod nvme_keyring 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 75163 ']' 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 75163 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 75163 ']' 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 75163 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75163 00:17:18.687 killing process with pid 75163 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:18.687 
13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75163' 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 75163 00:17:18.687 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 75163 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:18.944 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:19.201 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:19.201 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:19.201 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.201 13:11:25 
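[Note] The iptr step above can strip exactly the firewall rules this test added because the ipts calls (nvmf/common.sh lines 217-219 earlier in this run) tagged each rule at insert time with an SPDK_NVMF comment. The pair reduces to the two commands shown in the trace:

  # Insert a rule carrying a recoverable tag...
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  # ...and later drop every tagged rule in one pass:
  iptables-save | grep -v SPDK_NVMF | iptables-restore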
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.201 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.201 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:17:19.201 00:17:19.201 real 0m12.003s 00:17:19.201 user 0m39.094s 00:17:19.201 sys 0m3.380s 00:17:19.201 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.201 ************************************ 00:17:19.201 END TEST nvmf_connect_stress 00:17:19.201 ************************************ 00:17:19.201 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.201 13:11:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:19.201 13:11:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:19.201 13:11:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.201 13:11:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:19.201 ************************************ 00:17:19.201 START TEST nvmf_fused_ordering 00:17:19.201 ************************************ 00:17:19.201 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:19.201 * Looking for test storage... 00:17:19.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:19.201 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:19.202 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:17:19.202 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.460 13:11:25 
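[Note] The real/user/sys block and the starred START TEST / END TEST banners above come from run_test, which wraps each suite in a bash `time` call; a sketch of the visible behavior (the actual helper in autotest_common.sh does additional bookkeeping):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"            # emits the real/user/sys summary on completion
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
  run_test nvmf_fused_ordering ./test/nvmf/target/fused_ordering.sh --transport=tcp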
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:19.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.460 --rc genhtml_branch_coverage=1 00:17:19.460 --rc genhtml_function_coverage=1 00:17:19.460 --rc genhtml_legend=1 00:17:19.460 --rc geninfo_all_blocks=1 00:17:19.460 --rc geninfo_unexecuted_blocks=1 00:17:19.460 00:17:19.460 ' 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:19.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.460 --rc genhtml_branch_coverage=1 00:17:19.460 --rc genhtml_function_coverage=1 00:17:19.460 --rc genhtml_legend=1 00:17:19.460 --rc geninfo_all_blocks=1 00:17:19.460 --rc geninfo_unexecuted_blocks=1 00:17:19.460 00:17:19.460 ' 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:19.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.460 --rc genhtml_branch_coverage=1 00:17:19.460 --rc genhtml_function_coverage=1 00:17:19.460 --rc genhtml_legend=1 00:17:19.460 --rc geninfo_all_blocks=1 00:17:19.460 --rc geninfo_unexecuted_blocks=1 00:17:19.460 00:17:19.460 ' 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:19.460 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:19.460 --rc genhtml_branch_coverage=1 00:17:19.460 --rc genhtml_function_coverage=1 00:17:19.460 --rc genhtml_legend=1 00:17:19.460 --rc geninfo_all_blocks=1 00:17:19.460 --rc geninfo_unexecuted_blocks=1 00:17:19.460 00:17:19.460 ' 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.460 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:19.461 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:19.461 13:11:25 
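[Note] The "[: : integer expression expected" complaint near the top of this stretch is a benign harness wart, not a test failure: an unset variable reaches test's numeric -eq as an empty string, the comparison errors out, and the script simply takes the false branch. In isolation:

  [ '' -eq 1 ]                  # reproduces the error: '' is not an integer
  [ "${SOME_FLAG:-0}" -eq 1 ]   # defensive form; SOME_FLAG is a hypothetical name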
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:19.461 Cannot find device "nvmf_init_br" 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:19.461 Cannot find device "nvmf_init_br2" 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:19.461 Cannot find device "nvmf_tgt_br" 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:19.461 Cannot find device "nvmf_tgt_br2" 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:19.461 Cannot find device "nvmf_init_br" 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:19.461 Cannot find device "nvmf_init_br2" 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:19.461 Cannot find device "nvmf_tgt_br" 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:19.461 Cannot find device "nvmf_tgt_br2" 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:19.461 Cannot find device "nvmf_br" 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:19.461 Cannot find device "nvmf_init_if" 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:17:19.461 
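The run of "Cannot find device ..." messages above is expected: nvmf_veth_init begins by tearing down interfaces left over from a previous run, and each failing ip command is immediately followed by a traced "true" under the same source line number, which points at the usual tolerate-failure idiom. A minimal sketch of that idiom (inferred from the paired trace lines, not quoted from common.sh):

    # Remove stale interfaces; a missing device prints an error but is non-fatal.
    ip link set nvmf_init_br nomaster || true
    ip link delete nvmf_br type bridge || true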
13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:19.461 Cannot find device "nvmf_init_if2" 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:19.461 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:19.461 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:19.461 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:19.719 13:11:26 
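By the end of the block above the test topology is fully addressed. A recap of the veth/netns layout, with every value taken from the trace:

    # Layout assembled by nvmf_veth_init:
    #   root netns:        nvmf_init_if   10.0.0.1/24  <-veth->  nvmf_init_br
    #                      nvmf_init_if2  10.0.0.2/24  <-veth->  nvmf_init_br2
    #   nvmf_tgt_ns_spdk:  nvmf_tgt_if    10.0.0.3/24  <-veth->  nvmf_tgt_br
    #                      nvmf_tgt_if2   10.0.0.4/24  <-veth->  nvmf_tgt_br2
    # The *_br peers are enslaved to the nvmf_br bridge in the next block,
    # giving the root-namespace initiators an L2 path to the namespaced target.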
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:19.719 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:19.720 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:19.720 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.127 ms 00:17:19.720 00:17:19.720 --- 10.0.0.3 ping statistics --- 00:17:19.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.720 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:19.720 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:19.720 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:17:19.720 00:17:19.720 --- 10.0.0.4 ping statistics --- 00:17:19.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.720 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:19.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:19.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:17:19.720 00:17:19.720 --- 10.0.0.1 ping statistics --- 00:17:19.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.720 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:19.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:17:19.720 00:17:19.720 --- 10.0.0.2 ping statistics --- 00:17:19.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.720 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:19.720 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:19.978 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:19.978 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:19.978 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:19.978 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.978 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=75580 00:17:19.978 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 75580 00:17:19.978 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:19.978 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 75580 ']' 00:17:19.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.978 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.978 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.978 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
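Two details in the block above are easy to miss. First, firewall rules go in through an ipts helper whose expansion is visible in the paired @217/@790 trace lines: every rule is tagged with an SPDK_NVMF comment so teardown can strip exactly the rules this test added (see the iptables-save | grep -v SPDK_NVMF | iptables-restore sequence during nvmftestfini). A sketch of the helper as inferred from those expansions:

    # Tag each rule with its own arguments so cleanup can filter on SPDK_NVMF.
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

Second, once the four cross-namespace pings succeed, NVMF_APP is prefixed with the netns exec command (@227), which is why nvmf_tgt below runs as "ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2" while the initiator-side tooling stays in the root namespace.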
00:17:19.978 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.978 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.978 [2024-12-06 13:11:26.320786] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:17:19.978 [2024-12-06 13:11:26.321179] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.978 [2024-12-06 13:11:26.475706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.236 [2024-12-06 13:11:26.515260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.236 [2024-12-06 13:11:26.515356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.236 [2024-12-06 13:11:26.515382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.236 [2024-12-06 13:11:26.515402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.236 [2024-12-06 13:11:26.515445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.236 [2024-12-06 13:11:26.516049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:20.236 [2024-12-06 13:11:26.669433] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:20.236 [2024-12-06 13:11:26.685591] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:20.236 NULL1 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.236 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:20.236 [2024-12-06 13:11:26.741874] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
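The rpc_cmd calls traced above assemble the target end to end: TCP transport, subsystem, listener, null bdev, namespace. As a consolidated sketch, the same sequence issued directly against the RPC socket would be roughly the following (rpc_cmd resolves to SPDK's scripts/rpc.py; the -s socket argument is omitted here, and the flag comments reflect the usual rpc.py option meanings):

    # Build the NVMe-oF TCP target exercised by fused_ordering.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10          # any host, serial, max 10 namespaces
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420              # listen inside the target netns
    scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512 B blocks
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering app then connects with the trtype/traddr/trsvcid/subnqn string shown at @22 and prints one fused_ordering(N) line per iteration, 0 through 1023, as seen below.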
00:17:20.236 [2024-12-06 13:11:26.742090] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75616 ] 00:17:20.804 Attached to nqn.2016-06.io.spdk:cnode1 00:17:20.804 Namespace ID: 1 size: 1GB 00:17:20.804 fused_ordering(0) 00:17:20.804 fused_ordering(1) 00:17:20.804 fused_ordering(2) 00:17:20.804 fused_ordering(3) 00:17:20.804 fused_ordering(4) 00:17:20.804 fused_ordering(5) 00:17:20.804 fused_ordering(6) 00:17:20.804 fused_ordering(7) 00:17:20.804 fused_ordering(8) 00:17:20.804 fused_ordering(9) 00:17:20.804 fused_ordering(10) 00:17:20.804 fused_ordering(11) 00:17:20.804 fused_ordering(12) 00:17:20.804 fused_ordering(13) 00:17:20.804 fused_ordering(14) 00:17:20.804 fused_ordering(15) 00:17:20.804 fused_ordering(16) 00:17:20.804 fused_ordering(17) 00:17:20.804 fused_ordering(18) 00:17:20.804 fused_ordering(19) 00:17:20.804 fused_ordering(20) 00:17:20.804 fused_ordering(21) 00:17:20.804 fused_ordering(22) 00:17:20.804 fused_ordering(23) 00:17:20.804 fused_ordering(24) 00:17:20.804 fused_ordering(25) 00:17:20.804 fused_ordering(26) 00:17:20.804 fused_ordering(27) 00:17:20.804 fused_ordering(28) 00:17:20.804 fused_ordering(29) 00:17:20.804 fused_ordering(30) 00:17:20.804 fused_ordering(31) 00:17:20.804 fused_ordering(32) 00:17:20.804 fused_ordering(33) 00:17:20.804 fused_ordering(34) 00:17:20.804 fused_ordering(35) 00:17:20.804 fused_ordering(36) 00:17:20.804 fused_ordering(37) 00:17:20.804 fused_ordering(38) 00:17:20.804 fused_ordering(39) 00:17:20.804 fused_ordering(40) 00:17:20.804 fused_ordering(41) 00:17:20.804 fused_ordering(42) 00:17:20.804 fused_ordering(43) 00:17:20.804 fused_ordering(44) 00:17:20.804 fused_ordering(45) 00:17:20.804 fused_ordering(46) 00:17:20.804 fused_ordering(47) 00:17:20.804 fused_ordering(48) 00:17:20.804 fused_ordering(49) 00:17:20.804 fused_ordering(50) 00:17:20.804 fused_ordering(51) 00:17:20.804 fused_ordering(52) 00:17:20.804 fused_ordering(53) 00:17:20.804 fused_ordering(54) 00:17:20.804 fused_ordering(55) 00:17:20.804 fused_ordering(56) 00:17:20.804 fused_ordering(57) 00:17:20.804 fused_ordering(58) 00:17:20.804 fused_ordering(59) 00:17:20.804 fused_ordering(60) 00:17:20.804 fused_ordering(61) 00:17:20.804 fused_ordering(62) 00:17:20.804 fused_ordering(63) 00:17:20.804 fused_ordering(64) 00:17:20.804 fused_ordering(65) 00:17:20.804 fused_ordering(66) 00:17:20.804 fused_ordering(67) 00:17:20.804 fused_ordering(68) 00:17:20.804 fused_ordering(69) 00:17:20.804 fused_ordering(70) 00:17:20.804 fused_ordering(71) 00:17:20.804 fused_ordering(72) 00:17:20.804 fused_ordering(73) 00:17:20.804 fused_ordering(74) 00:17:20.804 fused_ordering(75) 00:17:20.804 fused_ordering(76) 00:17:20.804 fused_ordering(77) 00:17:20.804 fused_ordering(78) 00:17:20.804 fused_ordering(79) 00:17:20.804 fused_ordering(80) 00:17:20.804 fused_ordering(81) 00:17:20.804 fused_ordering(82) 00:17:20.804 fused_ordering(83) 00:17:20.804 fused_ordering(84) 00:17:20.804 fused_ordering(85) 00:17:20.804 fused_ordering(86) 00:17:20.804 fused_ordering(87) 00:17:20.804 fused_ordering(88) 00:17:20.804 fused_ordering(89) 00:17:20.804 fused_ordering(90) 00:17:20.804 fused_ordering(91) 00:17:20.804 fused_ordering(92) 00:17:20.804 fused_ordering(93) 00:17:20.804 fused_ordering(94) 00:17:20.804 fused_ordering(95) 00:17:20.804 fused_ordering(96) 00:17:20.804 fused_ordering(97) 00:17:20.804 
fused_ordering(98) 00:17:20.804 fused_ordering(99) 00:17:20.804 fused_ordering(100) 00:17:20.804 fused_ordering(101) 00:17:20.804 fused_ordering(102) 00:17:20.804 fused_ordering(103) 00:17:20.804 fused_ordering(104) 00:17:20.804 fused_ordering(105) 00:17:20.804 fused_ordering(106) 00:17:20.804 fused_ordering(107) 00:17:20.804 fused_ordering(108) 00:17:20.804 fused_ordering(109) 00:17:20.804 fused_ordering(110) 00:17:20.804 fused_ordering(111) 00:17:20.804 fused_ordering(112) 00:17:20.804 fused_ordering(113) 00:17:20.804 fused_ordering(114) 00:17:20.804 fused_ordering(115) 00:17:20.804 fused_ordering(116) 00:17:20.804 fused_ordering(117) 00:17:20.804 fused_ordering(118) 00:17:20.804 fused_ordering(119) 00:17:20.804 fused_ordering(120) 00:17:20.804 fused_ordering(121) 00:17:20.804 fused_ordering(122) 00:17:20.804 fused_ordering(123) 00:17:20.804 fused_ordering(124) 00:17:20.804 fused_ordering(125) 00:17:20.804 fused_ordering(126) 00:17:20.804 fused_ordering(127) 00:17:20.804 fused_ordering(128) 00:17:20.804 fused_ordering(129) 00:17:20.804 fused_ordering(130) 00:17:20.804 fused_ordering(131) 00:17:20.804 fused_ordering(132) 00:17:20.804 fused_ordering(133) 00:17:20.804 fused_ordering(134) 00:17:20.804 fused_ordering(135) 00:17:20.804 fused_ordering(136) 00:17:20.804 fused_ordering(137) 00:17:20.804 fused_ordering(138) 00:17:20.804 fused_ordering(139) 00:17:20.804 fused_ordering(140) 00:17:20.804 fused_ordering(141) 00:17:20.804 fused_ordering(142) 00:17:20.804 fused_ordering(143) 00:17:20.804 fused_ordering(144) 00:17:20.804 fused_ordering(145) 00:17:20.804 fused_ordering(146) 00:17:20.804 fused_ordering(147) 00:17:20.804 fused_ordering(148) 00:17:20.804 fused_ordering(149) 00:17:20.804 fused_ordering(150) 00:17:20.804 fused_ordering(151) 00:17:20.804 fused_ordering(152) 00:17:20.804 fused_ordering(153) 00:17:20.804 fused_ordering(154) 00:17:20.804 fused_ordering(155) 00:17:20.804 fused_ordering(156) 00:17:20.804 fused_ordering(157) 00:17:20.804 fused_ordering(158) 00:17:20.804 fused_ordering(159) 00:17:20.804 fused_ordering(160) 00:17:20.804 fused_ordering(161) 00:17:20.804 fused_ordering(162) 00:17:20.804 fused_ordering(163) 00:17:20.804 fused_ordering(164) 00:17:20.804 fused_ordering(165) 00:17:20.804 fused_ordering(166) 00:17:20.804 fused_ordering(167) 00:17:20.804 fused_ordering(168) 00:17:20.804 fused_ordering(169) 00:17:20.804 fused_ordering(170) 00:17:20.804 fused_ordering(171) 00:17:20.804 fused_ordering(172) 00:17:20.804 fused_ordering(173) 00:17:20.804 fused_ordering(174) 00:17:20.804 fused_ordering(175) 00:17:20.804 fused_ordering(176) 00:17:20.804 fused_ordering(177) 00:17:20.804 fused_ordering(178) 00:17:20.804 fused_ordering(179) 00:17:20.804 fused_ordering(180) 00:17:20.804 fused_ordering(181) 00:17:20.804 fused_ordering(182) 00:17:20.804 fused_ordering(183) 00:17:20.804 fused_ordering(184) 00:17:20.804 fused_ordering(185) 00:17:20.804 fused_ordering(186) 00:17:20.804 fused_ordering(187) 00:17:20.804 fused_ordering(188) 00:17:20.804 fused_ordering(189) 00:17:20.804 fused_ordering(190) 00:17:20.804 fused_ordering(191) 00:17:20.804 fused_ordering(192) 00:17:20.804 fused_ordering(193) 00:17:20.804 fused_ordering(194) 00:17:20.804 fused_ordering(195) 00:17:20.804 fused_ordering(196) 00:17:20.804 fused_ordering(197) 00:17:20.804 fused_ordering(198) 00:17:20.804 fused_ordering(199) 00:17:20.804 fused_ordering(200) 00:17:20.804 fused_ordering(201) 00:17:20.804 fused_ordering(202) 00:17:20.804 fused_ordering(203) 00:17:20.804 fused_ordering(204) 00:17:20.804 fused_ordering(205) 
00:17:21.063 fused_ordering(206) 00:17:21.063 fused_ordering(207) 00:17:21.063 fused_ordering(208) 00:17:21.063 fused_ordering(209) 00:17:21.063 fused_ordering(210) 00:17:21.063 fused_ordering(211) 00:17:21.063 fused_ordering(212) 00:17:21.063 fused_ordering(213) 00:17:21.063 fused_ordering(214) 00:17:21.063 fused_ordering(215) 00:17:21.063 fused_ordering(216) 00:17:21.063 fused_ordering(217) 00:17:21.063 fused_ordering(218) 00:17:21.063 fused_ordering(219) 00:17:21.063 fused_ordering(220) 00:17:21.063 fused_ordering(221) 00:17:21.063 fused_ordering(222) 00:17:21.063 fused_ordering(223) 00:17:21.063 fused_ordering(224) 00:17:21.063 fused_ordering(225) 00:17:21.063 fused_ordering(226) 00:17:21.063 fused_ordering(227) 00:17:21.063 fused_ordering(228) 00:17:21.063 fused_ordering(229) 00:17:21.063 fused_ordering(230) 00:17:21.063 fused_ordering(231) 00:17:21.063 fused_ordering(232) 00:17:21.063 fused_ordering(233) 00:17:21.063 fused_ordering(234) 00:17:21.063 fused_ordering(235) 00:17:21.063 fused_ordering(236) 00:17:21.063 fused_ordering(237) 00:17:21.063 fused_ordering(238) 00:17:21.063 fused_ordering(239) 00:17:21.063 fused_ordering(240) 00:17:21.063 fused_ordering(241) 00:17:21.063 fused_ordering(242) 00:17:21.063 fused_ordering(243) 00:17:21.063 fused_ordering(244) 00:17:21.063 fused_ordering(245) 00:17:21.063 fused_ordering(246) 00:17:21.063 fused_ordering(247) 00:17:21.063 fused_ordering(248) 00:17:21.063 fused_ordering(249) 00:17:21.063 fused_ordering(250) 00:17:21.063 fused_ordering(251) 00:17:21.063 fused_ordering(252) 00:17:21.063 fused_ordering(253) 00:17:21.063 fused_ordering(254) 00:17:21.063 fused_ordering(255) 00:17:21.063 fused_ordering(256) 00:17:21.063 fused_ordering(257) 00:17:21.063 fused_ordering(258) 00:17:21.063 fused_ordering(259) 00:17:21.063 fused_ordering(260) 00:17:21.063 fused_ordering(261) 00:17:21.063 fused_ordering(262) 00:17:21.063 fused_ordering(263) 00:17:21.063 fused_ordering(264) 00:17:21.063 fused_ordering(265) 00:17:21.063 fused_ordering(266) 00:17:21.063 fused_ordering(267) 00:17:21.063 fused_ordering(268) 00:17:21.063 fused_ordering(269) 00:17:21.063 fused_ordering(270) 00:17:21.063 fused_ordering(271) 00:17:21.063 fused_ordering(272) 00:17:21.063 fused_ordering(273) 00:17:21.063 fused_ordering(274) 00:17:21.063 fused_ordering(275) 00:17:21.063 fused_ordering(276) 00:17:21.063 fused_ordering(277) 00:17:21.063 fused_ordering(278) 00:17:21.063 fused_ordering(279) 00:17:21.063 fused_ordering(280) 00:17:21.063 fused_ordering(281) 00:17:21.063 fused_ordering(282) 00:17:21.063 fused_ordering(283) 00:17:21.063 fused_ordering(284) 00:17:21.063 fused_ordering(285) 00:17:21.063 fused_ordering(286) 00:17:21.063 fused_ordering(287) 00:17:21.063 fused_ordering(288) 00:17:21.063 fused_ordering(289) 00:17:21.063 fused_ordering(290) 00:17:21.063 fused_ordering(291) 00:17:21.063 fused_ordering(292) 00:17:21.063 fused_ordering(293) 00:17:21.063 fused_ordering(294) 00:17:21.063 fused_ordering(295) 00:17:21.063 fused_ordering(296) 00:17:21.063 fused_ordering(297) 00:17:21.063 fused_ordering(298) 00:17:21.063 fused_ordering(299) 00:17:21.063 fused_ordering(300) 00:17:21.063 fused_ordering(301) 00:17:21.063 fused_ordering(302) 00:17:21.063 fused_ordering(303) 00:17:21.063 fused_ordering(304) 00:17:21.063 fused_ordering(305) 00:17:21.063 fused_ordering(306) 00:17:21.063 fused_ordering(307) 00:17:21.063 fused_ordering(308) 00:17:21.063 fused_ordering(309) 00:17:21.063 fused_ordering(310) 00:17:21.063 fused_ordering(311) 00:17:21.063 fused_ordering(312) 00:17:21.063 
fused_ordering(313) 00:17:21.063 fused_ordering(314) 00:17:21.063 fused_ordering(315) 00:17:21.063 fused_ordering(316) 00:17:21.063 fused_ordering(317) 00:17:21.063 fused_ordering(318) 00:17:21.063 fused_ordering(319) 00:17:21.063 fused_ordering(320) 00:17:21.063 fused_ordering(321) 00:17:21.063 fused_ordering(322) 00:17:21.063 fused_ordering(323) 00:17:21.063 fused_ordering(324) 00:17:21.063 fused_ordering(325) 00:17:21.063 fused_ordering(326) 00:17:21.063 fused_ordering(327) 00:17:21.063 fused_ordering(328) 00:17:21.063 fused_ordering(329) 00:17:21.063 fused_ordering(330) 00:17:21.063 fused_ordering(331) 00:17:21.063 fused_ordering(332) 00:17:21.063 fused_ordering(333) 00:17:21.063 fused_ordering(334) 00:17:21.063 fused_ordering(335) 00:17:21.063 fused_ordering(336) 00:17:21.063 fused_ordering(337) 00:17:21.063 fused_ordering(338) 00:17:21.063 fused_ordering(339) 00:17:21.063 fused_ordering(340) 00:17:21.063 fused_ordering(341) 00:17:21.063 fused_ordering(342) 00:17:21.063 fused_ordering(343) 00:17:21.063 fused_ordering(344) 00:17:21.063 fused_ordering(345) 00:17:21.063 fused_ordering(346) 00:17:21.063 fused_ordering(347) 00:17:21.063 fused_ordering(348) 00:17:21.063 fused_ordering(349) 00:17:21.063 fused_ordering(350) 00:17:21.063 fused_ordering(351) 00:17:21.063 fused_ordering(352) 00:17:21.063 fused_ordering(353) 00:17:21.063 fused_ordering(354) 00:17:21.063 fused_ordering(355) 00:17:21.063 fused_ordering(356) 00:17:21.063 fused_ordering(357) 00:17:21.063 fused_ordering(358) 00:17:21.063 fused_ordering(359) 00:17:21.063 fused_ordering(360) 00:17:21.063 fused_ordering(361) 00:17:21.063 fused_ordering(362) 00:17:21.063 fused_ordering(363) 00:17:21.063 fused_ordering(364) 00:17:21.063 fused_ordering(365) 00:17:21.063 fused_ordering(366) 00:17:21.063 fused_ordering(367) 00:17:21.063 fused_ordering(368) 00:17:21.063 fused_ordering(369) 00:17:21.063 fused_ordering(370) 00:17:21.063 fused_ordering(371) 00:17:21.063 fused_ordering(372) 00:17:21.063 fused_ordering(373) 00:17:21.063 fused_ordering(374) 00:17:21.063 fused_ordering(375) 00:17:21.063 fused_ordering(376) 00:17:21.063 fused_ordering(377) 00:17:21.063 fused_ordering(378) 00:17:21.063 fused_ordering(379) 00:17:21.063 fused_ordering(380) 00:17:21.063 fused_ordering(381) 00:17:21.063 fused_ordering(382) 00:17:21.063 fused_ordering(383) 00:17:21.063 fused_ordering(384) 00:17:21.063 fused_ordering(385) 00:17:21.063 fused_ordering(386) 00:17:21.063 fused_ordering(387) 00:17:21.063 fused_ordering(388) 00:17:21.063 fused_ordering(389) 00:17:21.063 fused_ordering(390) 00:17:21.063 fused_ordering(391) 00:17:21.063 fused_ordering(392) 00:17:21.063 fused_ordering(393) 00:17:21.063 fused_ordering(394) 00:17:21.063 fused_ordering(395) 00:17:21.063 fused_ordering(396) 00:17:21.063 fused_ordering(397) 00:17:21.063 fused_ordering(398) 00:17:21.063 fused_ordering(399) 00:17:21.063 fused_ordering(400) 00:17:21.063 fused_ordering(401) 00:17:21.063 fused_ordering(402) 00:17:21.063 fused_ordering(403) 00:17:21.063 fused_ordering(404) 00:17:21.063 fused_ordering(405) 00:17:21.063 fused_ordering(406) 00:17:21.063 fused_ordering(407) 00:17:21.063 fused_ordering(408) 00:17:21.063 fused_ordering(409) 00:17:21.063 fused_ordering(410) 00:17:21.631 fused_ordering(411) 00:17:21.631 fused_ordering(412) 00:17:21.631 fused_ordering(413) 00:17:21.631 fused_ordering(414) 00:17:21.631 fused_ordering(415) 00:17:21.631 fused_ordering(416) 00:17:21.631 fused_ordering(417) 00:17:21.631 fused_ordering(418) 00:17:21.631 fused_ordering(419) 00:17:21.631 fused_ordering(420) 
00:17:21.631 fused_ordering(421) 00:17:21.631 fused_ordering(422) 00:17:21.631 fused_ordering(423) 00:17:21.631 fused_ordering(424) 00:17:21.631 fused_ordering(425) 00:17:21.631 fused_ordering(426) 00:17:21.631 fused_ordering(427) 00:17:21.631 fused_ordering(428) 00:17:21.631 fused_ordering(429) 00:17:21.631 fused_ordering(430) 00:17:21.631 fused_ordering(431) 00:17:21.631 fused_ordering(432) 00:17:21.631 fused_ordering(433) 00:17:21.631 fused_ordering(434) 00:17:21.631 fused_ordering(435) 00:17:21.631 fused_ordering(436) 00:17:21.631 fused_ordering(437) 00:17:21.631 fused_ordering(438) 00:17:21.631 fused_ordering(439) 00:17:21.631 fused_ordering(440) 00:17:21.631 fused_ordering(441) 00:17:21.631 fused_ordering(442) 00:17:21.631 fused_ordering(443) 00:17:21.631 fused_ordering(444) 00:17:21.631 fused_ordering(445) 00:17:21.631 fused_ordering(446) 00:17:21.631 fused_ordering(447) 00:17:21.631 fused_ordering(448) 00:17:21.631 fused_ordering(449) 00:17:21.631 fused_ordering(450) 00:17:21.631 fused_ordering(451) 00:17:21.631 fused_ordering(452) 00:17:21.631 fused_ordering(453) 00:17:21.631 fused_ordering(454) 00:17:21.631 fused_ordering(455) 00:17:21.631 fused_ordering(456) 00:17:21.631 fused_ordering(457) 00:17:21.631 fused_ordering(458) 00:17:21.631 fused_ordering(459) 00:17:21.631 fused_ordering(460) 00:17:21.631 fused_ordering(461) 00:17:21.631 fused_ordering(462) 00:17:21.631 fused_ordering(463) 00:17:21.631 fused_ordering(464) 00:17:21.631 fused_ordering(465) 00:17:21.631 fused_ordering(466) 00:17:21.631 fused_ordering(467) 00:17:21.631 fused_ordering(468) 00:17:21.631 fused_ordering(469) 00:17:21.631 fused_ordering(470) 00:17:21.631 fused_ordering(471) 00:17:21.631 fused_ordering(472) 00:17:21.631 fused_ordering(473) 00:17:21.631 fused_ordering(474) 00:17:21.631 fused_ordering(475) 00:17:21.631 fused_ordering(476) 00:17:21.631 fused_ordering(477) 00:17:21.631 fused_ordering(478) 00:17:21.631 fused_ordering(479) 00:17:21.631 fused_ordering(480) 00:17:21.631 fused_ordering(481) 00:17:21.631 fused_ordering(482) 00:17:21.631 fused_ordering(483) 00:17:21.631 fused_ordering(484) 00:17:21.631 fused_ordering(485) 00:17:21.631 fused_ordering(486) 00:17:21.631 fused_ordering(487) 00:17:21.631 fused_ordering(488) 00:17:21.631 fused_ordering(489) 00:17:21.631 fused_ordering(490) 00:17:21.631 fused_ordering(491) 00:17:21.631 fused_ordering(492) 00:17:21.631 fused_ordering(493) 00:17:21.631 fused_ordering(494) 00:17:21.631 fused_ordering(495) 00:17:21.631 fused_ordering(496) 00:17:21.631 fused_ordering(497) 00:17:21.631 fused_ordering(498) 00:17:21.631 fused_ordering(499) 00:17:21.631 fused_ordering(500) 00:17:21.631 fused_ordering(501) 00:17:21.631 fused_ordering(502) 00:17:21.631 fused_ordering(503) 00:17:21.631 fused_ordering(504) 00:17:21.631 fused_ordering(505) 00:17:21.631 fused_ordering(506) 00:17:21.631 fused_ordering(507) 00:17:21.631 fused_ordering(508) 00:17:21.631 fused_ordering(509) 00:17:21.631 fused_ordering(510) 00:17:21.631 fused_ordering(511) 00:17:21.631 fused_ordering(512) 00:17:21.631 fused_ordering(513) 00:17:21.631 fused_ordering(514) 00:17:21.631 fused_ordering(515) 00:17:21.631 fused_ordering(516) 00:17:21.631 fused_ordering(517) 00:17:21.631 fused_ordering(518) 00:17:21.631 fused_ordering(519) 00:17:21.631 fused_ordering(520) 00:17:21.631 fused_ordering(521) 00:17:21.631 fused_ordering(522) 00:17:21.631 fused_ordering(523) 00:17:21.631 fused_ordering(524) 00:17:21.631 fused_ordering(525) 00:17:21.631 fused_ordering(526) 00:17:21.631 fused_ordering(527) 00:17:21.632 
fused_ordering(528) 00:17:21.632 fused_ordering(529) 00:17:21.632 fused_ordering(530) 00:17:21.632 fused_ordering(531) 00:17:21.632 fused_ordering(532) 00:17:21.632 fused_ordering(533) 00:17:21.632 fused_ordering(534) 00:17:21.632 fused_ordering(535) 00:17:21.632 fused_ordering(536) 00:17:21.632 fused_ordering(537) 00:17:21.632 fused_ordering(538) 00:17:21.632 fused_ordering(539) 00:17:21.632 fused_ordering(540) 00:17:21.632 fused_ordering(541) 00:17:21.632 fused_ordering(542) 00:17:21.632 fused_ordering(543) 00:17:21.632 fused_ordering(544) 00:17:21.632 fused_ordering(545) 00:17:21.632 fused_ordering(546) 00:17:21.632 fused_ordering(547) 00:17:21.632 fused_ordering(548) 00:17:21.632 fused_ordering(549) 00:17:21.632 fused_ordering(550) 00:17:21.632 fused_ordering(551) 00:17:21.632 fused_ordering(552) 00:17:21.632 fused_ordering(553) 00:17:21.632 fused_ordering(554) 00:17:21.632 fused_ordering(555) 00:17:21.632 fused_ordering(556) 00:17:21.632 fused_ordering(557) 00:17:21.632 fused_ordering(558) 00:17:21.632 fused_ordering(559) 00:17:21.632 fused_ordering(560) 00:17:21.632 fused_ordering(561) 00:17:21.632 fused_ordering(562) 00:17:21.632 fused_ordering(563) 00:17:21.632 fused_ordering(564) 00:17:21.632 fused_ordering(565) 00:17:21.632 fused_ordering(566) 00:17:21.632 fused_ordering(567) 00:17:21.632 fused_ordering(568) 00:17:21.632 fused_ordering(569) 00:17:21.632 fused_ordering(570) 00:17:21.632 fused_ordering(571) 00:17:21.632 fused_ordering(572) 00:17:21.632 fused_ordering(573) 00:17:21.632 fused_ordering(574) 00:17:21.632 fused_ordering(575) 00:17:21.632 fused_ordering(576) 00:17:21.632 fused_ordering(577) 00:17:21.632 fused_ordering(578) 00:17:21.632 fused_ordering(579) 00:17:21.632 fused_ordering(580) 00:17:21.632 fused_ordering(581) 00:17:21.632 fused_ordering(582) 00:17:21.632 fused_ordering(583) 00:17:21.632 fused_ordering(584) 00:17:21.632 fused_ordering(585) 00:17:21.632 fused_ordering(586) 00:17:21.632 fused_ordering(587) 00:17:21.632 fused_ordering(588) 00:17:21.632 fused_ordering(589) 00:17:21.632 fused_ordering(590) 00:17:21.632 fused_ordering(591) 00:17:21.632 fused_ordering(592) 00:17:21.632 fused_ordering(593) 00:17:21.632 fused_ordering(594) 00:17:21.632 fused_ordering(595) 00:17:21.632 fused_ordering(596) 00:17:21.632 fused_ordering(597) 00:17:21.632 fused_ordering(598) 00:17:21.632 fused_ordering(599) 00:17:21.632 fused_ordering(600) 00:17:21.632 fused_ordering(601) 00:17:21.632 fused_ordering(602) 00:17:21.632 fused_ordering(603) 00:17:21.632 fused_ordering(604) 00:17:21.632 fused_ordering(605) 00:17:21.632 fused_ordering(606) 00:17:21.632 fused_ordering(607) 00:17:21.632 fused_ordering(608) 00:17:21.632 fused_ordering(609) 00:17:21.632 fused_ordering(610) 00:17:21.632 fused_ordering(611) 00:17:21.632 fused_ordering(612) 00:17:21.632 fused_ordering(613) 00:17:21.632 fused_ordering(614) 00:17:21.632 fused_ordering(615) 00:17:21.892 fused_ordering(616) 00:17:21.892 fused_ordering(617) 00:17:21.892 fused_ordering(618) 00:17:21.892 fused_ordering(619) 00:17:21.892 fused_ordering(620) 00:17:21.892 fused_ordering(621) 00:17:21.892 fused_ordering(622) 00:17:21.892 fused_ordering(623) 00:17:21.892 fused_ordering(624) 00:17:21.892 fused_ordering(625) 00:17:21.892 fused_ordering(626) 00:17:21.892 fused_ordering(627) 00:17:21.892 fused_ordering(628) 00:17:21.892 fused_ordering(629) 00:17:21.892 fused_ordering(630) 00:17:21.892 fused_ordering(631) 00:17:21.892 fused_ordering(632) 00:17:21.892 fused_ordering(633) 00:17:21.892 fused_ordering(634) 00:17:21.892 fused_ordering(635) 
00:17:21.892 fused_ordering(636) 00:17:21.892 fused_ordering(637) 00:17:21.892 fused_ordering(638) 00:17:21.892 fused_ordering(639) 00:17:21.892 fused_ordering(640) 00:17:21.892 fused_ordering(641) 00:17:21.892 fused_ordering(642) 00:17:21.892 fused_ordering(643) 00:17:21.892 fused_ordering(644) 00:17:21.892 fused_ordering(645) 00:17:21.892 fused_ordering(646) 00:17:21.892 fused_ordering(647) 00:17:21.892 fused_ordering(648) 00:17:21.892 fused_ordering(649) 00:17:21.892 fused_ordering(650) 00:17:21.892 fused_ordering(651) 00:17:21.892 fused_ordering(652) 00:17:21.892 fused_ordering(653) 00:17:21.892 fused_ordering(654) 00:17:21.892 fused_ordering(655) 00:17:21.892 fused_ordering(656) 00:17:21.892 fused_ordering(657) 00:17:21.892 fused_ordering(658) 00:17:21.892 fused_ordering(659) 00:17:21.892 fused_ordering(660) 00:17:21.892 fused_ordering(661) 00:17:21.892 fused_ordering(662) 00:17:21.892 fused_ordering(663) 00:17:21.892 fused_ordering(664) 00:17:21.892 fused_ordering(665) 00:17:21.892 fused_ordering(666) 00:17:21.892 fused_ordering(667) 00:17:21.892 fused_ordering(668) 00:17:21.892 fused_ordering(669) 00:17:21.892 fused_ordering(670) 00:17:21.892 fused_ordering(671) 00:17:21.892 fused_ordering(672) 00:17:21.892 fused_ordering(673) 00:17:21.892 fused_ordering(674) 00:17:21.892 fused_ordering(675) 00:17:21.892 fused_ordering(676) 00:17:21.892 fused_ordering(677) 00:17:21.892 fused_ordering(678) 00:17:21.892 fused_ordering(679) 00:17:21.892 fused_ordering(680) 00:17:21.892 fused_ordering(681) 00:17:21.892 fused_ordering(682) 00:17:21.892 fused_ordering(683) 00:17:21.892 fused_ordering(684) 00:17:21.892 fused_ordering(685) 00:17:21.892 fused_ordering(686) 00:17:21.892 fused_ordering(687) 00:17:21.892 fused_ordering(688) 00:17:21.892 fused_ordering(689) 00:17:21.892 fused_ordering(690) 00:17:21.892 fused_ordering(691) 00:17:21.892 fused_ordering(692) 00:17:21.892 fused_ordering(693) 00:17:21.892 fused_ordering(694) 00:17:21.892 fused_ordering(695) 00:17:21.892 fused_ordering(696) 00:17:21.892 fused_ordering(697) 00:17:21.892 fused_ordering(698) 00:17:21.892 fused_ordering(699) 00:17:21.892 fused_ordering(700) 00:17:21.892 fused_ordering(701) 00:17:21.892 fused_ordering(702) 00:17:21.892 fused_ordering(703) 00:17:21.892 fused_ordering(704) 00:17:21.892 fused_ordering(705) 00:17:21.892 fused_ordering(706) 00:17:21.892 fused_ordering(707) 00:17:21.892 fused_ordering(708) 00:17:21.892 fused_ordering(709) 00:17:21.892 fused_ordering(710) 00:17:21.892 fused_ordering(711) 00:17:21.892 fused_ordering(712) 00:17:21.892 fused_ordering(713) 00:17:21.892 fused_ordering(714) 00:17:21.892 fused_ordering(715) 00:17:21.892 fused_ordering(716) 00:17:21.892 fused_ordering(717) 00:17:21.892 fused_ordering(718) 00:17:21.892 fused_ordering(719) 00:17:21.892 fused_ordering(720) 00:17:21.892 fused_ordering(721) 00:17:21.892 fused_ordering(722) 00:17:21.892 fused_ordering(723) 00:17:21.892 fused_ordering(724) 00:17:21.892 fused_ordering(725) 00:17:21.892 fused_ordering(726) 00:17:21.892 fused_ordering(727) 00:17:21.892 fused_ordering(728) 00:17:21.892 fused_ordering(729) 00:17:21.892 fused_ordering(730) 00:17:21.892 fused_ordering(731) 00:17:21.892 fused_ordering(732) 00:17:21.892 fused_ordering(733) 00:17:21.892 fused_ordering(734) 00:17:21.892 fused_ordering(735) 00:17:21.892 fused_ordering(736) 00:17:21.892 fused_ordering(737) 00:17:21.892 fused_ordering(738) 00:17:21.892 fused_ordering(739) 00:17:21.892 fused_ordering(740) 00:17:21.892 fused_ordering(741) 00:17:21.892 fused_ordering(742) 00:17:21.892 
fused_ordering(743) 00:17:21.892 fused_ordering(744) 00:17:21.892 fused_ordering(745) 00:17:21.892 fused_ordering(746) 00:17:21.892 fused_ordering(747) 00:17:21.892 fused_ordering(748) 00:17:21.892 fused_ordering(749) 00:17:21.892 fused_ordering(750) 00:17:21.892 fused_ordering(751) 00:17:21.892 fused_ordering(752) 00:17:21.892 fused_ordering(753) 00:17:21.892 fused_ordering(754) 00:17:21.892 fused_ordering(755) 00:17:21.892 fused_ordering(756) 00:17:21.892 fused_ordering(757) 00:17:21.892 fused_ordering(758) 00:17:21.892 fused_ordering(759) 00:17:21.892 fused_ordering(760) 00:17:21.892 fused_ordering(761) 00:17:21.892 fused_ordering(762) 00:17:21.892 fused_ordering(763) 00:17:21.892 fused_ordering(764) 00:17:21.892 fused_ordering(765) 00:17:21.892 fused_ordering(766) 00:17:21.892 fused_ordering(767) 00:17:21.892 fused_ordering(768) 00:17:21.892 fused_ordering(769) 00:17:21.892 fused_ordering(770) 00:17:21.892 fused_ordering(771) 00:17:21.892 fused_ordering(772) 00:17:21.892 fused_ordering(773) 00:17:21.892 fused_ordering(774) 00:17:21.892 fused_ordering(775) 00:17:21.892 fused_ordering(776) 00:17:21.892 fused_ordering(777) 00:17:21.892 fused_ordering(778) 00:17:21.892 fused_ordering(779) 00:17:21.892 fused_ordering(780) 00:17:21.892 fused_ordering(781) 00:17:21.892 fused_ordering(782) 00:17:21.892 fused_ordering(783) 00:17:21.892 fused_ordering(784) 00:17:21.892 fused_ordering(785) 00:17:21.892 fused_ordering(786) 00:17:21.892 fused_ordering(787) 00:17:21.892 fused_ordering(788) 00:17:21.892 fused_ordering(789) 00:17:21.892 fused_ordering(790) 00:17:21.892 fused_ordering(791) 00:17:21.892 fused_ordering(792) 00:17:21.892 fused_ordering(793) 00:17:21.892 fused_ordering(794) 00:17:21.892 fused_ordering(795) 00:17:21.892 fused_ordering(796) 00:17:21.892 fused_ordering(797) 00:17:21.892 fused_ordering(798) 00:17:21.892 fused_ordering(799) 00:17:21.892 fused_ordering(800) 00:17:21.892 fused_ordering(801) 00:17:21.892 fused_ordering(802) 00:17:21.892 fused_ordering(803) 00:17:21.892 fused_ordering(804) 00:17:21.892 fused_ordering(805) 00:17:21.892 fused_ordering(806) 00:17:21.892 fused_ordering(807) 00:17:21.892 fused_ordering(808) 00:17:21.892 fused_ordering(809) 00:17:21.892 fused_ordering(810) 00:17:21.892 fused_ordering(811) 00:17:21.892 fused_ordering(812) 00:17:21.892 fused_ordering(813) 00:17:21.892 fused_ordering(814) 00:17:21.892 fused_ordering(815) 00:17:21.892 fused_ordering(816) 00:17:21.892 fused_ordering(817) 00:17:21.892 fused_ordering(818) 00:17:21.892 fused_ordering(819) 00:17:21.892 fused_ordering(820) 00:17:22.461 fused_ordering(821) 00:17:22.461 fused_ordering(822) 00:17:22.461 fused_ordering(823) 00:17:22.461 fused_ordering(824) 00:17:22.461 fused_ordering(825) 00:17:22.461 fused_ordering(826) 00:17:22.461 fused_ordering(827) 00:17:22.461 fused_ordering(828) 00:17:22.461 fused_ordering(829) 00:17:22.461 fused_ordering(830) 00:17:22.461 fused_ordering(831) 00:17:22.461 fused_ordering(832) 00:17:22.461 fused_ordering(833) 00:17:22.461 fused_ordering(834) 00:17:22.461 fused_ordering(835) 00:17:22.461 fused_ordering(836) 00:17:22.461 fused_ordering(837) 00:17:22.461 fused_ordering(838) 00:17:22.461 fused_ordering(839) 00:17:22.461 fused_ordering(840) 00:17:22.461 fused_ordering(841) 00:17:22.461 fused_ordering(842) 00:17:22.461 fused_ordering(843) 00:17:22.461 fused_ordering(844) 00:17:22.461 fused_ordering(845) 00:17:22.461 fused_ordering(846) 00:17:22.461 fused_ordering(847) 00:17:22.461 fused_ordering(848) 00:17:22.461 fused_ordering(849) 00:17:22.461 fused_ordering(850) 
00:17:22.461 fused_ordering(851) 00:17:22.461 fused_ordering(852) 00:17:22.461 fused_ordering(853) 00:17:22.461 fused_ordering(854) 00:17:22.461 fused_ordering(855) 00:17:22.461 fused_ordering(856) 00:17:22.461 fused_ordering(857) 00:17:22.461 fused_ordering(858) 00:17:22.461 fused_ordering(859) 00:17:22.461 fused_ordering(860) 00:17:22.461 fused_ordering(861) 00:17:22.461 fused_ordering(862) 00:17:22.461 fused_ordering(863) 00:17:22.461 fused_ordering(864) 00:17:22.461 fused_ordering(865) 00:17:22.461 fused_ordering(866) 00:17:22.461 fused_ordering(867) 00:17:22.461 fused_ordering(868) 00:17:22.461 fused_ordering(869) 00:17:22.461 fused_ordering(870) 00:17:22.461 fused_ordering(871) 00:17:22.461 fused_ordering(872) 00:17:22.461 fused_ordering(873) 00:17:22.461 fused_ordering(874) 00:17:22.461 fused_ordering(875) 00:17:22.461 fused_ordering(876) 00:17:22.461 fused_ordering(877) 00:17:22.461 fused_ordering(878) 00:17:22.461 fused_ordering(879) 00:17:22.461 fused_ordering(880) 00:17:22.461 fused_ordering(881) 00:17:22.461 fused_ordering(882) 00:17:22.461 fused_ordering(883) 00:17:22.461 fused_ordering(884) 00:17:22.461 fused_ordering(885) 00:17:22.461 fused_ordering(886) 00:17:22.461 fused_ordering(887) 00:17:22.461 fused_ordering(888) 00:17:22.461 fused_ordering(889) 00:17:22.461 fused_ordering(890) 00:17:22.461 fused_ordering(891) 00:17:22.461 fused_ordering(892) 00:17:22.461 fused_ordering(893) 00:17:22.461 fused_ordering(894) 00:17:22.461 fused_ordering(895) 00:17:22.461 fused_ordering(896) 00:17:22.461 fused_ordering(897) 00:17:22.461 fused_ordering(898) 00:17:22.461 fused_ordering(899) 00:17:22.461 fused_ordering(900) 00:17:22.461 fused_ordering(901) 00:17:22.461 fused_ordering(902) 00:17:22.461 fused_ordering(903) 00:17:22.461 fused_ordering(904) 00:17:22.461 fused_ordering(905) 00:17:22.461 fused_ordering(906) 00:17:22.461 fused_ordering(907) 00:17:22.461 fused_ordering(908) 00:17:22.461 fused_ordering(909) 00:17:22.461 fused_ordering(910) 00:17:22.461 fused_ordering(911) 00:17:22.461 fused_ordering(912) 00:17:22.461 fused_ordering(913) 00:17:22.461 fused_ordering(914) 00:17:22.461 fused_ordering(915) 00:17:22.461 fused_ordering(916) 00:17:22.461 fused_ordering(917) 00:17:22.461 fused_ordering(918) 00:17:22.461 fused_ordering(919) 00:17:22.461 fused_ordering(920) 00:17:22.461 fused_ordering(921) 00:17:22.461 fused_ordering(922) 00:17:22.461 fused_ordering(923) 00:17:22.461 fused_ordering(924) 00:17:22.461 fused_ordering(925) 00:17:22.461 fused_ordering(926) 00:17:22.461 fused_ordering(927) 00:17:22.461 fused_ordering(928) 00:17:22.461 fused_ordering(929) 00:17:22.461 fused_ordering(930) 00:17:22.461 fused_ordering(931) 00:17:22.461 fused_ordering(932) 00:17:22.461 fused_ordering(933) 00:17:22.461 fused_ordering(934) 00:17:22.461 fused_ordering(935) 00:17:22.461 fused_ordering(936) 00:17:22.461 fused_ordering(937) 00:17:22.461 fused_ordering(938) 00:17:22.461 fused_ordering(939) 00:17:22.461 fused_ordering(940) 00:17:22.461 fused_ordering(941) 00:17:22.461 fused_ordering(942) 00:17:22.461 fused_ordering(943) 00:17:22.461 fused_ordering(944) 00:17:22.461 fused_ordering(945) 00:17:22.461 fused_ordering(946) 00:17:22.461 fused_ordering(947) 00:17:22.461 fused_ordering(948) 00:17:22.461 fused_ordering(949) 00:17:22.461 fused_ordering(950) 00:17:22.461 fused_ordering(951) 00:17:22.461 fused_ordering(952) 00:17:22.461 fused_ordering(953) 00:17:22.461 fused_ordering(954) 00:17:22.461 fused_ordering(955) 00:17:22.461 fused_ordering(956) 00:17:22.461 fused_ordering(957) 00:17:22.461 
fused_ordering(958) 00:17:22.461 fused_ordering(959) 00:17:22.461 fused_ordering(960) 00:17:22.461 fused_ordering(961) 00:17:22.461 fused_ordering(962) 00:17:22.461 fused_ordering(963) 00:17:22.461 fused_ordering(964) 00:17:22.461 fused_ordering(965) 00:17:22.461 fused_ordering(966) 00:17:22.461 fused_ordering(967) 00:17:22.461 fused_ordering(968) 00:17:22.461 fused_ordering(969) 00:17:22.461 fused_ordering(970) 00:17:22.461 fused_ordering(971) 00:17:22.461 fused_ordering(972) 00:17:22.461 fused_ordering(973) 00:17:22.461 fused_ordering(974) 00:17:22.461 fused_ordering(975) 00:17:22.461 fused_ordering(976) 00:17:22.461 fused_ordering(977) 00:17:22.461 fused_ordering(978) 00:17:22.461 fused_ordering(979) 00:17:22.461 fused_ordering(980) 00:17:22.461 fused_ordering(981) 00:17:22.461 fused_ordering(982) 00:17:22.461 fused_ordering(983) 00:17:22.461 fused_ordering(984) 00:17:22.461 fused_ordering(985) 00:17:22.461 fused_ordering(986) 00:17:22.461 fused_ordering(987) 00:17:22.461 fused_ordering(988) 00:17:22.461 fused_ordering(989) 00:17:22.461 fused_ordering(990) 00:17:22.461 fused_ordering(991) 00:17:22.461 fused_ordering(992) 00:17:22.461 fused_ordering(993) 00:17:22.461 fused_ordering(994) 00:17:22.461 fused_ordering(995) 00:17:22.461 fused_ordering(996) 00:17:22.461 fused_ordering(997) 00:17:22.461 fused_ordering(998) 00:17:22.461 fused_ordering(999) 00:17:22.461 fused_ordering(1000) 00:17:22.461 fused_ordering(1001) 00:17:22.461 fused_ordering(1002) 00:17:22.461 fused_ordering(1003) 00:17:22.461 fused_ordering(1004) 00:17:22.461 fused_ordering(1005) 00:17:22.461 fused_ordering(1006) 00:17:22.461 fused_ordering(1007) 00:17:22.461 fused_ordering(1008) 00:17:22.461 fused_ordering(1009) 00:17:22.461 fused_ordering(1010) 00:17:22.461 fused_ordering(1011) 00:17:22.461 fused_ordering(1012) 00:17:22.461 fused_ordering(1013) 00:17:22.461 fused_ordering(1014) 00:17:22.461 fused_ordering(1015) 00:17:22.461 fused_ordering(1016) 00:17:22.461 fused_ordering(1017) 00:17:22.461 fused_ordering(1018) 00:17:22.461 fused_ordering(1019) 00:17:22.461 fused_ordering(1020) 00:17:22.461 fused_ordering(1021) 00:17:22.461 fused_ordering(1022) 00:17:22.461 fused_ordering(1023) 00:17:22.461 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:22.461 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:22.461 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:22.461 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:22.461 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:22.461 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:22.462 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:22.462 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:22.462 rmmod nvme_tcp 00:17:22.722 rmmod nvme_fabrics 00:17:22.722 rmmod nvme_keyring 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:22.722 13:11:29 
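Teardown starts above: the EXIT trap fires and nvmfcleanup unloads the initiator-side kernel modules; the single modprobe -r nvme-tcp cascades to its now-unused dependencies, hence the three rmmod lines. The @124-@129 trace (set +e, a {1..20} loop, modprobe -v -r calls, set -e) implies a bounded retry that tolerates a busy module while connections drain; the loop body is not fully visible here, so the following is only a loose reconstruction:

    # Loose reconstruction of the unload sequence implied by the @124-@129 trace.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1   # assumption: back off while nvme-tcp is still in use
    done
    modprobe -v -r nvme-fabrics
    set -e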
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 75580 ']' 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 75580 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 75580 ']' 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 75580 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75580 00:17:22.722 killing process with pid 75580 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75580' 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 75580 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 75580 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:22.722 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:17:22.981 00:17:22.981 real 0m3.891s 00:17:22.981 user 0m4.349s 00:17:22.981 sys 0m1.484s 00:17:22.981 ************************************ 00:17:22.981 END TEST nvmf_fused_ordering 00:17:22.981 ************************************ 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.981 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:23.241 ************************************ 00:17:23.241 START TEST nvmf_ns_masking 00:17:23.241 ************************************ 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:23.241 * Looking for test storage... 
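For orientation: the fused_ordering teardown that nvmf_tcp_fini just performed condenses to roughly the following sequence (a sketch assembled from the commands visible in the trace above, not the verbatim nvmf/common.sh source; the final netns removal is an assumption, since _remove_spdk_ns runs with xtrace disabled):

    # drop only the SPDK-tagged iptables rules, preserving everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # detach every bridge-side veth end from nvmf_br, then bring it down
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    # delete the bridge, the initiator-side interfaces, and the namespaced target interfaces
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed body of _remove_spdk_ns

The ns_masking test that starts next rebuilds this same veth/bridge topology from scratch (nvmf_veth_init) before registering its subsystem.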
00:17:23.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:23.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.241 --rc genhtml_branch_coverage=1 00:17:23.241 --rc genhtml_function_coverage=1 00:17:23.241 --rc genhtml_legend=1 00:17:23.241 --rc geninfo_all_blocks=1 00:17:23.241 --rc geninfo_unexecuted_blocks=1 00:17:23.241 00:17:23.241 ' 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:23.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.241 --rc genhtml_branch_coverage=1 00:17:23.241 --rc genhtml_function_coverage=1 00:17:23.241 --rc genhtml_legend=1 00:17:23.241 --rc geninfo_all_blocks=1 00:17:23.241 --rc geninfo_unexecuted_blocks=1 00:17:23.241 00:17:23.241 ' 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:23.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.241 --rc genhtml_branch_coverage=1 00:17:23.241 --rc genhtml_function_coverage=1 00:17:23.241 --rc genhtml_legend=1 00:17:23.241 --rc geninfo_all_blocks=1 00:17:23.241 --rc geninfo_unexecuted_blocks=1 00:17:23.241 00:17:23.241 ' 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:23.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.241 --rc genhtml_branch_coverage=1 00:17:23.241 --rc genhtml_function_coverage=1 00:17:23.241 --rc genhtml_legend=1 00:17:23.241 --rc geninfo_all_blocks=1 00:17:23.241 --rc geninfo_unexecuted_blocks=1 00:17:23.241 00:17:23.241 ' 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.241 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:23.242 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.242 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:23.242 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:23.242 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:23.242 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.242 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.242 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.242 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:23.242 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:23.242 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:23.242 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:23.242 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:23.242 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:23.242 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:17:23.242 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:23.242 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=9882c6c7-a01f-44c1-9cd5-b48b23dad2f3 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=ea75ce09-d906-4ffb-9a53-2cd533ce3717 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=f8641142-d303-4044-9ef8-692f821a55ef 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:23.500 13:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:23.500 Cannot find device "nvmf_init_br" 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:23.500 Cannot find device "nvmf_init_br2" 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:23.500 Cannot find device "nvmf_tgt_br" 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:17:23.500 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:23.501 Cannot find device "nvmf_tgt_br2" 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:23.501 Cannot find device "nvmf_init_br" 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:23.501 Cannot find device "nvmf_init_br2" 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:23.501 Cannot find device "nvmf_tgt_br" 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:23.501 Cannot find device 
"nvmf_tgt_br2" 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:23.501 Cannot find device "nvmf_br" 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:23.501 Cannot find device "nvmf_init_if" 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:23.501 Cannot find device "nvmf_init_if2" 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:23.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:23.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:23.501 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:23.501 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:23.759 
13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:23.759 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:23.759 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.140 ms 00:17:23.759 00:17:23.759 --- 10.0.0.3 ping statistics --- 00:17:23.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.759 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:23.759 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:23.759 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:17:23.759 00:17:23.759 --- 10.0.0.4 ping statistics --- 00:17:23.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.759 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:23.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:23.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:23.759 00:17:23.759 --- 10.0.0.1 ping statistics --- 00:17:23.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.759 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:23.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:23.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:17:23.759 00:17:23.759 --- 10.0.0.2 ping statistics --- 00:17:23.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.759 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:23.759 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:23.760 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:23.760 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:23.760 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:23.760 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:23.760 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:23.760 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:23.760 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=75864 00:17:23.760 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 75864 00:17:23.760 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:23.760 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 75864 ']' 00:17:23.760 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.760 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.760 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:17:23.760 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.760 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.760 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:24.019 [2024-12-06 13:11:30.295892] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:17:24.019 [2024-12-06 13:11:30.296011] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.019 [2024-12-06 13:11:30.447444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.019 [2024-12-06 13:11:30.480022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.019 [2024-12-06 13:11:30.480093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.019 [2024-12-06 13:11:30.480121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.019 [2024-12-06 13:11:30.480130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.019 [2024-12-06 13:11:30.480137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.019 [2024-12-06 13:11:30.480442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.278 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.278 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:24.278 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:24.278 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:24.278 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:24.278 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.278 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:24.536 [2024-12-06 13:11:30.896638] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.537 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:24.537 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:24.537 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:24.795 Malloc1 00:17:24.795 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:25.054 Malloc2 00:17:25.054 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:25.312 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:25.570 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:25.828 [2024-12-06 13:11:32.333481] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:26.086 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:26.086 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f8641142-d303-4044-9ef8-692f821a55ef -a 10.0.0.3 -s 4420 -i 4 00:17:26.086 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:26.086 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:26.086 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:26.086 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:26.086 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:27.990 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:27.990 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:27.990 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:27.990 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:27.990 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:27.990 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:27.990 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:27.990 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:28.249 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:28.249 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:28.249 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:28.249 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:28.249 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:28.249 [ 0]:0x1 00:17:28.249 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:28.249 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 
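The connect/ns_is_visible helpers being exercised here reduce to a pair of nvme-cli probes; a minimal sketch reconstructed from the xtrace (function names follow target/ns_masking.sh, but the bodies are approximations of what the trace shows, not the script source):

    connect() {    # attach as host1, with an explicit host identifier and 4 I/O queues
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
            -I "$HOSTID" -a 10.0.0.3 -s 4420 -i 4
    }
    ns_is_visible() {    # $1 = nsid as listed by the controller, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$1"    # prints e.g. "[ 0]:0x1" when the ns is enumerated
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # a namespace masked from this host identifies with an all-zero NGUID
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The nguid captured on the next line (631fb96e...) is non-zero, so namespace 1 is visible to this host at this point in the test.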
00:17:28.249 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=631fb96e6ee444859520fba4c4aa931c 00:17:28.249 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 631fb96e6ee444859520fba4c4aa931c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:28.249 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:28.507 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:28.507 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:28.507 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:28.507 [ 0]:0x1 00:17:28.508 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:28.508 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:28.822 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=631fb96e6ee444859520fba4c4aa931c 00:17:28.822 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 631fb96e6ee444859520fba4c4aa931c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:28.822 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:28.822 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:28.822 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:28.822 [ 1]:0x2 00:17:28.822 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:28.822 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:28.822 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bc311e29bcdb41ffabc1262027d239ac 00:17:28.822 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bc311e29bcdb41ffabc1262027d239ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:28.822 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:28.822 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:28.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.822 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:29.080 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:29.339 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:29.339 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f8641142-d303-4044-9ef8-692f821a55ef -a 10.0.0.3 -s 4420 -i 4 00:17:29.598 13:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:29.598 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:29.598 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:29.598 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:29.598 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:29.598 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:31.501 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:31.501 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:31.501 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:31.501 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:31.502 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:31.502 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:31.502 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:31.502 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:31.502 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:31.502 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:31.502 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:31.502 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:31.502 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:31.502 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:31.502 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.502 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:31.502 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.502 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:31.502 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:31.502 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:31.502 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:31.502 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:31.502 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:31.502 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:31.502 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:31.502 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:31.502 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:31.502 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:31.502 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:31.502 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:31.502 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:31.760 [ 0]:0x2 00:17:31.760 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:31.760 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:31.760 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bc311e29bcdb41ffabc1262027d239ac 00:17:31.760 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bc311e29bcdb41ffabc1262027d239ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:31.760 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:32.019 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:32.019 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:32.019 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:32.019 [ 0]:0x1 00:17:32.019 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:32.019 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:32.019 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=631fb96e6ee444859520fba4c4aa931c 00:17:32.019 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 631fb96e6ee444859520fba4c4aa931c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:32.019 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:32.019 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:32.019 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:32.019 [ 1]:0x2 00:17:32.019 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:32.019 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:32.019 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=bc311e29bcdb41ffabc1262027d239ac 00:17:32.019 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bc311e29bcdb41ffabc1262027d239ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:32.019 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:32.610 [ 0]:0x2 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bc311e29bcdb41ffabc1262027d239ac 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ bc311e29bcdb41ffabc1262027d239ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:32.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:32.610 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:32.868 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:32.868 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f8641142-d303-4044-9ef8-692f821a55ef -a 10.0.0.3 -s 4420 -i 4 00:17:33.126 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:33.126 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:33.126 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:33.126 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:33.126 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:33.126 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:35.026 [ 0]:0x1 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=631fb96e6ee444859520fba4c4aa931c 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 631fb96e6ee444859520fba4c4aa931c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:35.026 [ 1]:0x2 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:35.026 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:35.285 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bc311e29bcdb41ffabc1262027d239ac 00:17:35.285 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bc311e29bcdb41ffabc1262027d239ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:35.285 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 
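(For reference: the ns_is_visible checks traced above reduce to the nvme-cli/jq sequence below. This is a minimal sketch; the controller node /dev/nvme0, NSID 0x1, and the all-zero NGUID convention are taken from this run, where a namespace masked from the host drops out of list-ns and identifies with a zeroed NGUID.)

# Does the namespace show up in the controller's active-namespace list at all?
nvme list-ns /dev/nvme0 | grep 0x1

# A namespace masked off from this host reports an all-zero NGUID from Identify Namespace.
nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
[[ "$nguid" != "00000000000000000000000000000000" ]] && echo "nsid 0x1 is visible"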
00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:35.543 [ 0]:0x2 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bc311e29bcdb41ffabc1262027d239ac 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bc311e29bcdb41ffabc1262027d239ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:35.543 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:35.801 [2024-12-06 13:11:42.228662] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:35.801 2024/12/06 13:11:42 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:17:35.801 request: 00:17:35.801 { 00:17:35.801 "method": "nvmf_ns_remove_host", 00:17:35.801 "params": { 00:17:35.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.801 "nsid": 2, 00:17:35.801 "host": "nqn.2016-06.io.spdk:host1" 00:17:35.801 } 00:17:35.801 } 00:17:35.801 Got JSON-RPC error response 00:17:35.801 GoRPCClient: error on JSON-RPC call 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:35.801 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:17:36.058 [ 0]:0x2 00:17:36.058 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:36.058 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:36.058 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bc311e29bcdb41ffabc1262027d239ac 00:17:36.058 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bc311e29bcdb41ffabc1262027d239ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:36.058 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:36.058 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:36.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.058 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=76232 00:17:36.058 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.058 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:36.058 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 76232 /var/tmp/host.sock 00:17:36.058 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 76232 ']' 00:17:36.058 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:36.058 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:36.058 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:36.058 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.058 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:36.058 [2024-12-06 13:11:42.483107] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
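(The step above launches a second SPDK application to act as the host side of the later bdev_nvme tests, on its own JSON-RPC socket so its commands do not collide with the main target. A minimal sketch of that pattern, with the binary path, socket, and core mask taken from this run; the test itself uses the waitforlisten helper rather than the ad-hoc pid capture shown here.)

# Host-side SPDK app with a dedicated RPC socket, pinned to core 1 (-m 2).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
hostpid=$!

# Once it is listening, host-side RPCs are addressed to that socket explicitly.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock rpc_get_methods >/dev/null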
00:17:36.058 [2024-12-06 13:11:42.483195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76232 ] 00:17:36.316 [2024-12-06 13:11:42.632168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.316 [2024-12-06 13:11:42.691792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.645 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.645 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:36.645 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:36.919 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:37.177 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 9882c6c7-a01f-44c1-9cd5-b48b23dad2f3 00:17:37.177 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:37.177 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9882C6C7A01F44C19CD5B48B23DAD2F3 -i 00:17:37.435 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid ea75ce09-d906-4ffb-9a53-2cd533ce3717 00:17:37.435 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:37.435 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g EA75CE09D9064FFB9A532CD533CE3717 -i 00:17:37.693 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:37.952 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:38.518 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:38.518 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:38.776 nvme0n1 00:17:38.776 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:38.776 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:39.034 nvme1n2 00:17:39.034 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:39.034 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:39.034 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:39.034 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:39.034 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:39.292 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:39.292 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:39.292 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:39.292 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:39.858 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 9882c6c7-a01f-44c1-9cd5-b48b23dad2f3 == \9\8\8\2\c\6\c\7\-\a\0\1\f\-\4\4\c\1\-\9\c\d\5\-\b\4\8\b\2\3\d\a\d\2\f\3 ]] 00:17:39.858 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:39.858 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:39.858 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:40.116 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ ea75ce09-d906-4ffb-9a53-2cd533ce3717 == \e\a\7\5\c\e\0\9\-\d\9\0\6\-\4\f\f\b\-\9\a\5\3\-\2\c\d\5\3\3\c\e\3\7\1\7 ]] 00:17:40.116 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:40.375 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:40.634 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 9882c6c7-a01f-44c1-9cd5-b48b23dad2f3 00:17:40.634 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:40.634 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9882C6C7A01F44C19CD5B48B23DAD2F3 00:17:40.634 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:40.635 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9882C6C7A01F44C19CD5B48B23DAD2F3 00:17:40.635 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.635 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.635 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.635 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.635 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.635 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.635 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.635 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:40.635 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9882C6C7A01F44C19CD5B48B23DAD2F3 00:17:40.892 [2024-12-06 13:11:47.263025] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:40.892 [2024-12-06 13:11:47.263087] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:40.892 [2024-12-06 13:11:47.263101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.892 2024/12/06 13:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid hide_metadata:%!s(bool=false) nguid:9882C6C7A01F44C19CD5B48B23DAD2F3 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:17:40.892 request: 00:17:40.892 { 00:17:40.892 "method": "nvmf_subsystem_add_ns", 00:17:40.892 "params": { 00:17:40.892 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.892 "namespace": { 00:17:40.892 "bdev_name": "invalid", 00:17:40.892 "nsid": 1, 00:17:40.892 "nguid": "9882C6C7A01F44C19CD5B48B23DAD2F3", 00:17:40.892 "no_auto_visible": false, 00:17:40.892 "hide_metadata": false 00:17:40.892 } 00:17:40.892 } 00:17:40.892 } 00:17:40.892 Got JSON-RPC error response 00:17:40.892 GoRPCClient: error on JSON-RPC call 00:17:40.892 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:40.892 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:40.892 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:40.892 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:40.892 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 9882c6c7-a01f-44c1-9cd5-b48b23dad2f3 00:17:40.892 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:40.892 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9882C6C7A01F44C19CD5B48B23DAD2F3 -i 
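(The uuid2nguid helper traced above is essentially the canonical UUID with its dashes stripped; the trace only shows the tr -d -, so the upper-casing below is an assumption made to match the NGUID as actually passed. The resulting 32-hex-digit value is handed to nvmf_subsystem_add_ns via -g, and the preceding NOT-wrapped attempt fails with -19 (ENODEV) because the bdev name "invalid" cannot be opened. A sketch, with the UUID and flags from this run:)

# NGUID is the UUID minus the dashes (upper-casing assumed to match the run above).
uuid=9882c6c7-a01f-44c1-9cd5-b48b23dad2f3
nguid=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')   # 9882C6C7A01F44C19CD5B48B23DAD2F3

# Re-attach bdev Malloc1 as namespace 1 with that NGUID; the trailing -i is the
# flag used by this masking test when adding the namespace, copied verbatim.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
    nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i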
00:17:41.149 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:43.060 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:43.060 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:43.060 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:43.628 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:43.628 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 76232 00:17:43.628 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 76232 ']' 00:17:43.628 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 76232 00:17:43.628 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:43.628 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.628 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76232 00:17:43.628 killing process with pid 76232 00:17:43.628 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:43.628 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:43.628 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76232' 00:17:43.628 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 76232 00:17:43.628 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 76232 00:17:43.628 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:44.196 rmmod nvme_tcp 00:17:44.196 rmmod nvme_fabrics 00:17:44.196 rmmod nvme_keyring 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- nvmf/common.sh@517 -- # '[' -n 75864 ']' 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 75864 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 75864 ']' 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 75864 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75864 00:17:44.196 killing process with pid 75864 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75864' 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 75864 00:17:44.196 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 75864 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 
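(nvmftestfini above tears things down in roughly the reverse order of setup: unload the host-side fabrics modules, drop only the iptables rules SPDK tagged on the way in, then dismantle the veth/bridge topology. A condensed sketch of the commands as traced above:)

# Unload host-side NVMe fabrics modules (rmmod output appears in the log).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# The iptr helper strips only the rules carrying the SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Detach the veth peers from the bridge, then delete the topology.
ip link set nvmf_init_br nomaster
ip link set nvmf_tgt_br nomaster
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if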
00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:44.455 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:44.714 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:44.714 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:44.714 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:44.714 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.714 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.714 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.714 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:17:44.714 ************************************ 00:17:44.714 END TEST nvmf_ns_masking 00:17:44.714 ************************************ 00:17:44.714 00:17:44.714 real 0m21.536s 00:17:44.714 user 0m37.055s 00:17:44.714 sys 0m3.101s 00:17:44.714 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.714 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:44.714 13:11:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:17:44.714 13:11:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:17:44.714 13:11:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:44.714 13:11:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:44.714 13:11:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.714 13:11:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:44.714 ************************************ 00:17:44.714 START TEST nvmf_auth_target 00:17:44.714 ************************************ 00:17:44.714 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:44.714 * Looking for test storage... 
00:17:44.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:44.714 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:44.714 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:17:44.714 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:45.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.025 --rc genhtml_branch_coverage=1 00:17:45.025 --rc genhtml_function_coverage=1 00:17:45.025 --rc genhtml_legend=1 00:17:45.025 --rc geninfo_all_blocks=1 00:17:45.025 --rc geninfo_unexecuted_blocks=1 00:17:45.025 00:17:45.025 ' 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:45.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.025 --rc genhtml_branch_coverage=1 00:17:45.025 --rc genhtml_function_coverage=1 00:17:45.025 --rc genhtml_legend=1 00:17:45.025 --rc geninfo_all_blocks=1 00:17:45.025 --rc geninfo_unexecuted_blocks=1 00:17:45.025 00:17:45.025 ' 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:45.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.025 --rc genhtml_branch_coverage=1 00:17:45.025 --rc genhtml_function_coverage=1 00:17:45.025 --rc genhtml_legend=1 00:17:45.025 --rc geninfo_all_blocks=1 00:17:45.025 --rc geninfo_unexecuted_blocks=1 00:17:45.025 00:17:45.025 ' 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:45.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.025 --rc genhtml_branch_coverage=1 00:17:45.025 --rc genhtml_function_coverage=1 00:17:45.025 --rc genhtml_legend=1 00:17:45.025 --rc geninfo_all_blocks=1 00:17:45.025 --rc geninfo_unexecuted_blocks=1 00:17:45.025 00:17:45.025 ' 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:45.025 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:45.025 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:45.026 
13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:45.026 Cannot find device "nvmf_init_br" 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:45.026 Cannot find device "nvmf_init_br2" 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:45.026 Cannot find device "nvmf_tgt_br" 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:45.026 Cannot find device "nvmf_tgt_br2" 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:45.026 Cannot find device "nvmf_init_br" 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:45.026 Cannot find device "nvmf_init_br2" 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:45.026 Cannot find device "nvmf_tgt_br" 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:45.026 Cannot find device "nvmf_tgt_br2" 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:45.026 Cannot find device "nvmf_br" 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:45.026 Cannot find device "nvmf_init_if" 00:17:45.026 13:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:45.026 Cannot find device "nvmf_init_if2" 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:45.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:45.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:45.026 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:45.313 13:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:45.313 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:45.313 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:17:45.313 00:17:45.313 --- 10.0.0.3 ping statistics --- 00:17:45.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.313 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:45.313 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:45.313 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:17:45.313 00:17:45.313 --- 10.0.0.4 ping statistics --- 00:17:45.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.313 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:45.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
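The "Cannot find device" and "Cannot open network namespace" messages at the top of this block are the expected first-run path: each cleanup command is paired with a traced `true`, so a missing interface is non-fatal. The setup that follows builds a bridged veth topology: the initiator keeps 10.0.0.1/10.0.0.2 in the root namespace, the target interfaces move into nvmf_tgt_ns_spdk with 10.0.0.3/10.0.0.4, and the peer ends of all four pairs are enslaved to the nvmf_br bridge. A condensed sketch of one of the two interface pairs, with the ipts wrapper expanded to the iptables commands it logs (the wrapper only appends an SPDK_NVMF comment so the rules can be found and removed at cleanup):

# Condensed recap of the commands traced above (second veth pair and
# error suppression omitted; names and addresses are the ones in the log).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end moves into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                       # bridge joins the two halves
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                            # root ns -> target ns

Running the target in its own namespace gives the suite a real TCP path between host and target on a single VM; the four pings around this point verify that path in both directions before any NVMe traffic is attempted.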
00:17:45.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:45.313 00:17:45.313 --- 10.0.0.1 ping statistics --- 00:17:45.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.313 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:45.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:45.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:17:45.313 00:17:45.313 --- 10.0.0.2 ping statistics --- 00:17:45.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.313 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=76719 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 76719 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76719 ']' 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
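With connectivity verified, nvmfappstart launches the target. Because NVMF_APP was prefixed with the namespace command a few lines up, the nvmf_tgt process (pid 76719 in this run) serves NVMe/TCP from inside nvmf_tgt_ns_spdk while its JSON-RPC socket stays at /var/tmp/spdk.sock on the shared filesystem. A minimal sketch of that launch-and-wait pattern, using the paths from the trace; the polling loop is only a stand-in for waitforlisten, whose real body has bounded retries:

# Start nvmf_tgt inside the namespace, then block until its RPC server
# answers on the UNIX socket (rpc_get_methods is a cheap probe call).
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
"${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done

The spdk_tgt started just below on /var/tmp/host.sock (pid 76763) plays the host role, which is why every keyring and bdev_nvme operation later in the log is issued twice: once through rpc_cmd against the target socket and once through the hostrpc wrapper against the host socket.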
00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.313 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.688 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.688 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:46.688 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:46.688 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:46.688 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.688 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.688 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=76763 00:17:46.688 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:46.688 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:46.688 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:46.688 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:46.688 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.688 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:46.688 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:17:46.688 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:46.688 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:46.688 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5d6299840c06e115513b0ac96249e7159167eaa352b159be 00:17:46.688 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.aKm 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5d6299840c06e115513b0ac96249e7159167eaa352b159be 0 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5d6299840c06e115513b0ac96249e7159167eaa352b159be 0 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5d6299840c06e115513b0ac96249e7159167eaa352b159be 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:46.689 13:11:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.aKm 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.aKm 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.aKm 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=db235d2909926d909cce8b4aaea1dd25ace00692e485bfb1de752596e3b1cd70 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.VQR 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key db235d2909926d909cce8b4aaea1dd25ace00692e485bfb1de752596e3b1cd70 3 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 db235d2909926d909cce8b4aaea1dd25ace00692e485bfb1de752596e3b1cd70 3 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=db235d2909926d909cce8b4aaea1dd25ace00692e485bfb1de752596e3b1cd70 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:46.689 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.VQR 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.VQR 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.VQR 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:46.689 13:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=380740d429acf3eed49603cbe990a60f 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Qnt 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 380740d429acf3eed49603cbe990a60f 1 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 380740d429acf3eed49603cbe990a60f 1 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=380740d429acf3eed49603cbe990a60f 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Qnt 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Qnt 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Qnt 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c66fa07b9dfbe33ddc7ccba6a2ad165e14be42cce0441ea7 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.kQz 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c66fa07b9dfbe33ddc7ccba6a2ad165e14be42cce0441ea7 2 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c66fa07b9dfbe33ddc7ccba6a2ad165e14be42cce0441ea7 2 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c66fa07b9dfbe33ddc7ccba6a2ad165e14be42cce0441ea7 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.kQz 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.kQz 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.kQz 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=73ce9aff52f147ad8c3b847a6136ab73d5760781362e0870 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Y5K 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 73ce9aff52f147ad8c3b847a6136ab73d5760781362e0870 2 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 73ce9aff52f147ad8c3b847a6136ab73d5760781362e0870 2 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=73ce9aff52f147ad8c3b847a6136ab73d5760781362e0870 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:46.689 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Y5K 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Y5K 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Y5K 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:46.948 13:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8d207a07fc2a350cf2238d333152827c 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.MQ3 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8d207a07fc2a350cf2238d333152827c 1 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8d207a07fc2a350cf2238d333152827c 1 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8d207a07fc2a350cf2238d333152827c 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.MQ3 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.MQ3 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.MQ3 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:46.948 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6b9fd7746ed8b9081ef32e06475c4ea9483a847d1362b00edc60cc654a81c74d 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ivT 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
6b9fd7746ed8b9081ef32e06475c4ea9483a847d1362b00edc60cc654a81c74d 3 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6b9fd7746ed8b9081ef32e06475c4ea9483a847d1362b00edc60cc654a81c74d 3 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6b9fd7746ed8b9081ef32e06475c4ea9483a847d1362b00edc60cc654a81c74d 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ivT 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ivT 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.ivT 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 76719 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76719 ']' 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.949 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.207 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.207 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:47.208 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 76763 /var/tmp/host.sock 00:17:47.208 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76763 ']' 00:17:47.208 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:47.208 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:47.208 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
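Each gen_dhchap_key call above draws len/2 random bytes with xxd, keeps them as an ASCII hex string, and wraps them into a DHHC-1 secret with an inline python snippet. The digest index comes from the digests map in the trace (null=0, sha256=1, sha384=2, sha512=3), and the wrapped value is the base64 of the hex string with a 4-byte CRC-32 trailer: key0's 48-character value 5d6299840c06e115513b0ac96249e7159167eaa352b159be reappears further down as DHHC-1:00:NWQ2Mjk5...2HW34g==: at the first nvme connect. A self-contained approximation follows; the real helper lives in nvmf/common.sh and its python body is not shown in the trace, so the little-endian zlib.crc32 trailer below is a reconstruction, not verbatim source:

# Sketch of gen_dhchap_key/format_dhchap_key: args are <digest-id> <hex-length>.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2 key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
    python - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                      # the ASCII hex string is the secret
crc = zlib.crc32(key).to_bytes(4, "little")     # 4-byte integrity trailer (assumed)
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
}
gen_dhchap_key_sketch 0 48   # e.g. DHHC-1:00:<base64>: like keys[0] above

Note that ckeys[3] is deliberately left empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion used by connect_authenticate later sends no controller key when key3 is exercised.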
00:17:47.208 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.208 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.466 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.466 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:47.466 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:47.466 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.466 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.466 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.466 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:47.466 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.aKm 00:17:47.466 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.466 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.466 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.466 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.aKm 00:17:47.466 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.aKm 00:17:48.031 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.VQR ]] 00:17:48.031 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VQR 00:17:48.031 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.031 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.031 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.031 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VQR 00:17:48.031 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VQR 00:17:48.289 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:48.289 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Qnt 00:17:48.289 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.289 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.289 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.289 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Qnt 00:17:48.289 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Qnt 00:17:48.546 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.kQz ]] 00:17:48.546 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kQz 00:17:48.546 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.546 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.546 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.546 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kQz 00:17:48.546 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kQz 00:17:48.803 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:48.803 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Y5K 00:17:48.803 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.803 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.803 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.803 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Y5K 00:17:48.804 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Y5K 00:17:49.061 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.MQ3 ]] 00:17:49.061 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MQ3 00:17:49.061 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.061 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.061 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.061 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MQ3 00:17:49.061 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MQ3 00:17:49.319 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:49.319 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ivT 00:17:49.319 13:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.319 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.319 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.319 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ivT 00:17:49.319 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ivT 00:17:49.576 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:49.576 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:49.576 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.576 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.576 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:49.576 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:49.834 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:49.834 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.834 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:49.834 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:49.834 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:49.834 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.834 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.834 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.834 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.834 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.834 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.834 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.834 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.399 00:17:50.399 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.399 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.399 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.656 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.656 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.656 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.656 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.656 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.656 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.656 { 00:17:50.656 "auth": { 00:17:50.656 "dhgroup": "null", 00:17:50.656 "digest": "sha256", 00:17:50.656 "state": "completed" 00:17:50.656 }, 00:17:50.656 "cntlid": 1, 00:17:50.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:17:50.656 "listen_address": { 00:17:50.656 "adrfam": "IPv4", 00:17:50.656 "traddr": "10.0.0.3", 00:17:50.656 "trsvcid": "4420", 00:17:50.656 "trtype": "TCP" 00:17:50.656 }, 00:17:50.656 "peer_address": { 00:17:50.656 "adrfam": "IPv4", 00:17:50.656 "traddr": "10.0.0.1", 00:17:50.656 "trsvcid": "59722", 00:17:50.656 "trtype": "TCP" 00:17:50.656 }, 00:17:50.656 "qid": 0, 00:17:50.656 "state": "enabled", 00:17:50.656 "thread": "nvmf_tgt_poll_group_000" 00:17:50.656 } 00:17:50.656 ]' 00:17:50.656 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.656 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.656 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.656 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:50.656 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.656 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.656 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.656 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.221 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:17:51.221 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.490 13:12:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.490 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.490 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.749 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.749 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.749 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.749 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.749 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.749 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.749 { 00:17:56.749 "auth": { 00:17:56.749 "dhgroup": "null", 00:17:56.749 "digest": "sha256", 00:17:56.749 "state": "completed" 00:17:56.749 }, 00:17:56.749 "cntlid": 3, 00:17:56.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:17:56.749 "listen_address": { 00:17:56.749 "adrfam": "IPv4", 00:17:56.749 "traddr": "10.0.0.3", 00:17:56.749 "trsvcid": "4420", 00:17:56.749 "trtype": "TCP" 00:17:56.749 }, 00:17:56.749 "peer_address": { 00:17:56.749 "adrfam": "IPv4", 00:17:56.749 "traddr": "10.0.0.1", 00:17:56.749 "trsvcid": "46788", 00:17:56.749 "trtype": "TCP" 00:17:56.749 }, 00:17:56.749 "qid": 0, 00:17:56.749 "state": "enabled", 00:17:56.749 "thread": "nvmf_tgt_poll_group_000" 00:17:56.749 } 00:17:56.749 ]' 00:17:56.749 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.749 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.749 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.749 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:56.749 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.749 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.749 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.749 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.318 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret 
DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:17:57.318 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:17:57.885 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.885 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:17:57.885 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.885 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.885 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.885 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.885 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:57.885 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:58.454 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:58.454 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.454 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:58.454 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:58.454 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:58.454 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.454 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.454 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.454 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.454 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.454 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.454 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.454 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.718 00:17:58.718 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.718 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.718 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.980 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.980 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.980 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.980 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.980 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.980 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.981 { 00:17:58.981 "auth": { 00:17:58.981 "dhgroup": "null", 00:17:58.981 "digest": "sha256", 00:17:58.981 "state": "completed" 00:17:58.981 }, 00:17:58.981 "cntlid": 5, 00:17:58.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:17:58.981 "listen_address": { 00:17:58.981 "adrfam": "IPv4", 00:17:58.981 "traddr": "10.0.0.3", 00:17:58.981 "trsvcid": "4420", 00:17:58.981 "trtype": "TCP" 00:17:58.981 }, 00:17:58.981 "peer_address": { 00:17:58.981 "adrfam": "IPv4", 00:17:58.981 "traddr": "10.0.0.1", 00:17:58.981 "trsvcid": "46804", 00:17:58.981 "trtype": "TCP" 00:17:58.981 }, 00:17:58.981 "qid": 0, 00:17:58.981 "state": "enabled", 00:17:58.981 "thread": "nvmf_tgt_poll_group_000" 00:17:58.981 } 00:17:58.981 ]' 00:17:58.981 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.981 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.981 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.981 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:58.981 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.239 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.239 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.239 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.497 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:17:59.497 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:18:00.063 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.063 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:00.063 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.063 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.063 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.063 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.063 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:00.063 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:00.630 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:00.630 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.630 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:00.630 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:00.630 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:00.630 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.630 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3 00:18:00.630 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.630 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.630 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.630 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:00.630 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.630 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.887 00:18:00.887 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.887 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.887 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.145 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.145 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.145 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.145 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.145 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.145 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.145 { 00:18:01.145 "auth": { 00:18:01.145 "dhgroup": "null", 00:18:01.145 "digest": "sha256", 00:18:01.145 "state": "completed" 00:18:01.145 }, 00:18:01.145 "cntlid": 7, 00:18:01.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:01.145 "listen_address": { 00:18:01.145 "adrfam": "IPv4", 00:18:01.145 "traddr": "10.0.0.3", 00:18:01.145 "trsvcid": "4420", 00:18:01.145 "trtype": "TCP" 00:18:01.145 }, 00:18:01.145 "peer_address": { 00:18:01.145 "adrfam": "IPv4", 00:18:01.145 "traddr": "10.0.0.1", 00:18:01.145 "trsvcid": "46822", 00:18:01.145 "trtype": "TCP" 00:18:01.145 }, 00:18:01.145 "qid": 0, 00:18:01.145 "state": "enabled", 00:18:01.145 "thread": "nvmf_tgt_poll_group_000" 00:18:01.145 } 00:18:01.145 ]' 00:18:01.145 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.145 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.145 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.145 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:01.145 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.145 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.145 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.145 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.710 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:18:01.710 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:18:02.277 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.278 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:02.278 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.278 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.278 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.278 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.278 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.278 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:02.278 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:02.536 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:02.536 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.536 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:02.536 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:02.536 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:02.536 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.536 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.536 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.536 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.536 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.536 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.536 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.536 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.794 00:18:02.794 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.794 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.794 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.359 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.359 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.359 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.359 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.359 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.359 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.359 { 00:18:03.359 "auth": { 00:18:03.359 "dhgroup": "ffdhe2048", 00:18:03.359 "digest": "sha256", 00:18:03.359 "state": "completed" 00:18:03.359 }, 00:18:03.359 "cntlid": 9, 00:18:03.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:03.359 "listen_address": { 00:18:03.359 "adrfam": "IPv4", 00:18:03.359 "traddr": "10.0.0.3", 00:18:03.359 "trsvcid": "4420", 00:18:03.359 "trtype": "TCP" 00:18:03.359 }, 00:18:03.359 "peer_address": { 00:18:03.359 "adrfam": "IPv4", 00:18:03.359 "traddr": "10.0.0.1", 00:18:03.359 "trsvcid": "48138", 00:18:03.359 "trtype": "TCP" 00:18:03.359 }, 00:18:03.359 "qid": 0, 00:18:03.359 "state": "enabled", 00:18:03.359 "thread": "nvmf_tgt_poll_group_000" 00:18:03.359 } 00:18:03.359 ]' 00:18:03.359 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.359 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.359 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.359 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:03.359 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.359 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.359 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.359 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.617 
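The cycle just completed attached the SPDK host app with named keys (--dhchap-key key0 --dhchap-ctrlr-key ckey0); the nvme-cli pass that follows repeats the same handshake from the kernel initiator, carrying the secrets inline. A side-by-side sketch of the two forms, using the addresses and NQNs from this run; $hostnqn and $hostid stand in for the uuid:78bfff7e-... strings, and the DHHC-1 payloads below are placeholders, not the test's real secrets:

# SPDK host-RPC path: keys are referenced by the names under which they were
# (presumably earlier in the run) registered on the host app's keyring.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Kernel path: the secret itself travels on the command line in DHHC-1 form,
# "DHHC-1:<t>:<base64 of key (+ CRC-32 tail)>:", where <t> names the transform
# applied to the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512).
# Note the nvme-cli spelling --dhchap-ctrl-secret vs. the RPC's
# --dhchap-ctrlr-key.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "DHHC-1:00:<base64-host-key>:" \
    --dhchap-ctrl-secret "DHHC-1:03:<base64-ctrl-key>:"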
13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:18:03.617 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:18:04.562 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.562 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:04.563 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.563 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.563 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.563 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.563 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:04.563 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:04.827 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:04.827 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.827 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:04.827 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:04.827 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:04.827 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.827 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.827 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.827 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.827 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.827 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.827 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.827 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.085 00:18:05.085 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.085 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.085 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.343 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.343 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.343 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.343 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.343 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.343 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.343 { 00:18:05.343 "auth": { 00:18:05.343 "dhgroup": "ffdhe2048", 00:18:05.343 "digest": "sha256", 00:18:05.343 "state": "completed" 00:18:05.343 }, 00:18:05.343 "cntlid": 11, 00:18:05.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:05.343 "listen_address": { 00:18:05.343 "adrfam": "IPv4", 00:18:05.343 "traddr": "10.0.0.3", 00:18:05.343 "trsvcid": "4420", 00:18:05.343 "trtype": "TCP" 00:18:05.343 }, 00:18:05.343 "peer_address": { 00:18:05.343 "adrfam": "IPv4", 00:18:05.343 "traddr": "10.0.0.1", 00:18:05.343 "trsvcid": "48166", 00:18:05.343 "trtype": "TCP" 00:18:05.343 }, 00:18:05.343 "qid": 0, 00:18:05.343 "state": "enabled", 00:18:05.343 "thread": "nvmf_tgt_poll_group_000" 00:18:05.343 } 00:18:05.343 ]' 00:18:05.343 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.343 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.343 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.343 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:05.343 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.600 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.600 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.600 
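Every "hostrpc <method>" echo in this trace (auth.sh@78 just above) is expanded on the next entry by auth.sh@31 into an rpc.py call against /var/tmp/host.sock: the test runs two SPDK applications, the target (driven through plain rpc_cmd) and a separate initiator app with its own RPC socket. A plausible shape for the wrapper, reconstructed from the @31 expansions rather than quoted from auth.sh:

# Forward an RPC to the host-side SPDK app instead of the target.
hostrpc() {
    # /var/tmp/host.sock is the initiator app's RPC socket in this run.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}

hostrpc bdev_nvme_detach_controller nvme0   # the pair traced above and below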
13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.858 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:18:05.858 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:18:06.422 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.422 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:06.422 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.422 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.422 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.422 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.422 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:06.422 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:06.680 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:06.680 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.680 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:06.680 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:06.680 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:06.680 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.680 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.680 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.680 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.680 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:18:06.680 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.680 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.680 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.245 00:18:07.245 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.245 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.245 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.503 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.503 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.503 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.503 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.503 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.503 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.503 { 00:18:07.503 "auth": { 00:18:07.503 "dhgroup": "ffdhe2048", 00:18:07.503 "digest": "sha256", 00:18:07.503 "state": "completed" 00:18:07.503 }, 00:18:07.503 "cntlid": 13, 00:18:07.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:07.503 "listen_address": { 00:18:07.503 "adrfam": "IPv4", 00:18:07.503 "traddr": "10.0.0.3", 00:18:07.503 "trsvcid": "4420", 00:18:07.503 "trtype": "TCP" 00:18:07.503 }, 00:18:07.503 "peer_address": { 00:18:07.503 "adrfam": "IPv4", 00:18:07.503 "traddr": "10.0.0.1", 00:18:07.503 "trsvcid": "48198", 00:18:07.503 "trtype": "TCP" 00:18:07.503 }, 00:18:07.503 "qid": 0, 00:18:07.503 "state": "enabled", 00:18:07.503 "thread": "nvmf_tgt_poll_group_000" 00:18:07.503 } 00:18:07.503 ]' 00:18:07.503 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.503 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.503 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.503 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:07.503 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.503 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.503 13:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.503 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.761 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:18:07.761 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:18:08.694 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.694 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:08.694 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.694 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.694 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.694 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.694 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:08.694 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:08.952 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:08.952 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.952 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:08.952 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:08.952 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.952 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.952 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3 00:18:08.952 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.952 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:08.952 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.952 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.952 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.952 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.211 00:18:09.211 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.211 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.211 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.778 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.778 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.778 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.778 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.778 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.778 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.778 { 00:18:09.778 "auth": { 00:18:09.778 "dhgroup": "ffdhe2048", 00:18:09.778 "digest": "sha256", 00:18:09.778 "state": "completed" 00:18:09.778 }, 00:18:09.778 "cntlid": 15, 00:18:09.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:09.778 "listen_address": { 00:18:09.778 "adrfam": "IPv4", 00:18:09.778 "traddr": "10.0.0.3", 00:18:09.778 "trsvcid": "4420", 00:18:09.778 "trtype": "TCP" 00:18:09.778 }, 00:18:09.778 "peer_address": { 00:18:09.778 "adrfam": "IPv4", 00:18:09.778 "traddr": "10.0.0.1", 00:18:09.778 "trsvcid": "48224", 00:18:09.778 "trtype": "TCP" 00:18:09.778 }, 00:18:09.778 "qid": 0, 00:18:09.778 "state": "enabled", 00:18:09.778 "thread": "nvmf_tgt_poll_group_000" 00:18:09.778 } 00:18:09.778 ]' 00:18:09.778 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.778 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.778 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.778 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:09.778 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.778 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.778 
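The @73-@77 block traced above is the per-connection assertion: the host app must report exactly one controller named nvme0, and the target's view of the new qpair (nvmf_subsystem_get_qpairs) must carry the negotiated digest and DH group plus an auth state of "completed", meaning DH-HMAC-CHAP finished before I/O was allowed. Condensed into one function (the helper name is mine; patterns like \s\h\a\2\5\6 in the log are only xtrace's escaping of the literal strings):

# Restatement of auth.sh@73-77, assuming a single qpair on the subsystem.
verify_qpair_auth() {
    local digest=$1 dhgroup=$2 qpairs
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
}

verify_qpair_auth sha256 ffdhe2048   # the cycle above, key3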
13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.778 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.344 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:18:10.344 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:18:10.912 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.912 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:10.912 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.912 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.912 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.912 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.912 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.912 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:10.912 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:11.171 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:11.171 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.171 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:11.171 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:11.171 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:11.171 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.171 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.171 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.171 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:11.171 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.171 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.171 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.171 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.737 00:18:11.737 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.737 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.737 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.995 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.995 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.995 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.995 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.995 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.995 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.995 { 00:18:11.995 "auth": { 00:18:11.995 "dhgroup": "ffdhe3072", 00:18:11.995 "digest": "sha256", 00:18:11.995 "state": "completed" 00:18:11.995 }, 00:18:11.995 "cntlid": 17, 00:18:11.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:11.995 "listen_address": { 00:18:11.995 "adrfam": "IPv4", 00:18:11.995 "traddr": "10.0.0.3", 00:18:11.995 "trsvcid": "4420", 00:18:11.995 "trtype": "TCP" 00:18:11.995 }, 00:18:11.996 "peer_address": { 00:18:11.996 "adrfam": "IPv4", 00:18:11.996 "traddr": "10.0.0.1", 00:18:11.996 "trsvcid": "48252", 00:18:11.996 "trtype": "TCP" 00:18:11.996 }, 00:18:11.996 "qid": 0, 00:18:11.996 "state": "enabled", 00:18:11.996 "thread": "nvmf_tgt_poll_group_000" 00:18:11.996 } 00:18:11.996 ]' 00:18:11.996 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.996 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.996 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.996 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:11.996 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.996 13:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.996 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.996 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.563 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:18:12.563 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:18:13.144 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.144 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:13.144 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.144 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.144 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.144 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.144 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:13.144 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:13.723 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:13.723 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.723 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:13.723 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:13.723 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:13.723 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.723 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:18:13.723 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.723 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.723 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.723 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.723 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.723 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.982 00:18:13.982 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.982 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.982 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.240 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.240 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.240 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.240 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.240 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.240 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.240 { 00:18:14.240 "auth": { 00:18:14.240 "dhgroup": "ffdhe3072", 00:18:14.240 "digest": "sha256", 00:18:14.240 "state": "completed" 00:18:14.240 }, 00:18:14.240 "cntlid": 19, 00:18:14.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:14.240 "listen_address": { 00:18:14.240 "adrfam": "IPv4", 00:18:14.240 "traddr": "10.0.0.3", 00:18:14.240 "trsvcid": "4420", 00:18:14.240 "trtype": "TCP" 00:18:14.240 }, 00:18:14.240 "peer_address": { 00:18:14.240 "adrfam": "IPv4", 00:18:14.240 "traddr": "10.0.0.1", 00:18:14.240 "trsvcid": "56232", 00:18:14.240 "trtype": "TCP" 00:18:14.240 }, 00:18:14.240 "qid": 0, 00:18:14.240 "state": "enabled", 00:18:14.240 "thread": "nvmf_tgt_poll_group_000" 00:18:14.240 } 00:18:14.240 ]' 00:18:14.240 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.240 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.240 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.498 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:14.498 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.498 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.498 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.498 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.756 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:18:14.756 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:18:15.322 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.322 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:15.322 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.322 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.322 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.322 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.322 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:15.322 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:15.580 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:15.580 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.580 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:15.580 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:15.581 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:15.581 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.581 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.581 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.581 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.581 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.581 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.581 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.581 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.147 00:18:16.147 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.147 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.147 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.405 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.405 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.405 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.405 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.405 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.405 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.405 { 00:18:16.405 "auth": { 00:18:16.405 "dhgroup": "ffdhe3072", 00:18:16.405 "digest": "sha256", 00:18:16.405 "state": "completed" 00:18:16.405 }, 00:18:16.405 "cntlid": 21, 00:18:16.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:16.405 "listen_address": { 00:18:16.405 "adrfam": "IPv4", 00:18:16.405 "traddr": "10.0.0.3", 00:18:16.405 "trsvcid": "4420", 00:18:16.405 "trtype": "TCP" 00:18:16.405 }, 00:18:16.405 "peer_address": { 00:18:16.405 "adrfam": "IPv4", 00:18:16.405 "traddr": "10.0.0.1", 00:18:16.405 "trsvcid": "56260", 00:18:16.405 "trtype": "TCP" 00:18:16.405 }, 00:18:16.405 "qid": 0, 00:18:16.405 "state": "enabled", 00:18:16.405 "thread": "nvmf_tgt_poll_group_000" 00:18:16.405 } 00:18:16.405 ]' 00:18:16.405 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.405 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.405 13:12:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.405 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:16.405 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.663 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.663 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.663 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.921 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:18:16.921 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:18:17.488 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.488 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:17.488 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.488 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.488 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.488 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.488 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:17.488 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:18.055 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:18.055 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.055 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:18.055 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:18.055 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:18.055 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.055 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3 00:18:18.055 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.055 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.055 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.055 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:18.055 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:18.055 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:18.313 00:18:18.313 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.313 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.313 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.571 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.571 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.571 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.571 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.571 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.571 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.572 { 00:18:18.572 "auth": { 00:18:18.572 "dhgroup": "ffdhe3072", 00:18:18.572 "digest": "sha256", 00:18:18.572 "state": "completed" 00:18:18.572 }, 00:18:18.572 "cntlid": 23, 00:18:18.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:18.572 "listen_address": { 00:18:18.572 "adrfam": "IPv4", 00:18:18.572 "traddr": "10.0.0.3", 00:18:18.572 "trsvcid": "4420", 00:18:18.572 "trtype": "TCP" 00:18:18.572 }, 00:18:18.572 "peer_address": { 00:18:18.572 "adrfam": "IPv4", 00:18:18.572 "traddr": "10.0.0.1", 00:18:18.572 "trsvcid": "56282", 00:18:18.572 "trtype": "TCP" 00:18:18.572 }, 00:18:18.572 "qid": 0, 00:18:18.572 "state": "enabled", 00:18:18.572 "thread": "nvmf_tgt_poll_group_000" 00:18:18.572 } 00:18:18.572 ]' 00:18:18.572 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.572 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:18:18.572 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.829 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:18.829 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.829 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.829 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.829 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.086 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:18:19.086 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:18:20.018 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.018 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:20.018 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.018 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.018 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.018 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.018 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.018 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:20.018 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:20.276 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:20.276 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.276 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:20.276 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:20.276 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:20.276 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.276 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.276 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.276 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.276 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.276 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.276 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.276 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.534 00:18:20.534 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.534 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.534 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.100 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.100 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.100 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.100 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.100 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.100 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.100 { 00:18:21.100 "auth": { 00:18:21.100 "dhgroup": "ffdhe4096", 00:18:21.100 "digest": "sha256", 00:18:21.100 "state": "completed" 00:18:21.100 }, 00:18:21.100 "cntlid": 25, 00:18:21.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:21.100 "listen_address": { 00:18:21.100 "adrfam": "IPv4", 00:18:21.100 "traddr": "10.0.0.3", 00:18:21.100 "trsvcid": "4420", 00:18:21.100 "trtype": "TCP" 00:18:21.100 }, 00:18:21.100 "peer_address": { 00:18:21.100 "adrfam": "IPv4", 00:18:21.100 "traddr": "10.0.0.1", 00:18:21.100 "trsvcid": "56316", 00:18:21.100 "trtype": "TCP" 00:18:21.100 }, 00:18:21.100 "qid": 0, 00:18:21.100 "state": "enabled", 00:18:21.100 "thread": "nvmf_tgt_poll_group_000" 00:18:21.100 } 00:18:21.100 ]' 00:18:21.100 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:18:21.100 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.100 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.100 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:21.100 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.100 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.100 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.100 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.666 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:18:21.666 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:18:22.233 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.233 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:22.233 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.233 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.233 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.233 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.233 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:22.233 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:22.490 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:18:22.490 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.490 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:22.490 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:22.490 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:22.490 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.490 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.490 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.490 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.490 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.490 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.490 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.490 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.057 00:18:23.057 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.057 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.057 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.316 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.316 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.316 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.316 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.316 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.316 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.316 { 00:18:23.316 "auth": { 00:18:23.316 "dhgroup": "ffdhe4096", 00:18:23.316 "digest": "sha256", 00:18:23.316 "state": "completed" 00:18:23.316 }, 00:18:23.316 "cntlid": 27, 00:18:23.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:23.316 "listen_address": { 00:18:23.316 "adrfam": "IPv4", 00:18:23.316 "traddr": "10.0.0.3", 00:18:23.316 "trsvcid": "4420", 00:18:23.316 "trtype": "TCP" 00:18:23.316 }, 00:18:23.316 "peer_address": { 00:18:23.316 "adrfam": "IPv4", 00:18:23.316 "traddr": "10.0.0.1", 00:18:23.316 "trsvcid": "50228", 00:18:23.316 "trtype": "TCP" 00:18:23.316 }, 00:18:23.316 "qid": 0, 
00:18:23.316 "state": "enabled", 00:18:23.316 "thread": "nvmf_tgt_poll_group_000" 00:18:23.316 } 00:18:23.316 ]' 00:18:23.316 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.316 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:23.316 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.316 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:23.317 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.317 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.317 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.317 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.884 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:18:23.884 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:18:24.452 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.452 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:24.452 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.452 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.452 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.452 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.452 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:24.452 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:24.711 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:24.711 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.711 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:18:24.711 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:24.711 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:24.711 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.711 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.711 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.711 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.970 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.970 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.970 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.970 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.229 00:18:25.229 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.229 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.229 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.488 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.488 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.488 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.488 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.488 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.488 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.488 { 00:18:25.488 "auth": { 00:18:25.488 "dhgroup": "ffdhe4096", 00:18:25.488 "digest": "sha256", 00:18:25.488 "state": "completed" 00:18:25.488 }, 00:18:25.488 "cntlid": 29, 00:18:25.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:25.488 "listen_address": { 00:18:25.488 "adrfam": "IPv4", 00:18:25.488 "traddr": "10.0.0.3", 00:18:25.488 "trsvcid": "4420", 00:18:25.488 "trtype": "TCP" 00:18:25.488 }, 00:18:25.488 "peer_address": { 00:18:25.488 "adrfam": "IPv4", 00:18:25.488 "traddr": "10.0.0.1", 
00:18:25.488 "trsvcid": "50256", 00:18:25.488 "trtype": "TCP" 00:18:25.488 }, 00:18:25.488 "qid": 0, 00:18:25.488 "state": "enabled", 00:18:25.488 "thread": "nvmf_tgt_poll_group_000" 00:18:25.488 } 00:18:25.488 ]' 00:18:25.488 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.747 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.747 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.747 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:25.747 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.747 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.747 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.747 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.006 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:18:26.006 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:18:26.574 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.833 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:26.833 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.834 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.834 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.834 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.834 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:26.834 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:27.092 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:18:27.092 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:18:27.092 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:27.092 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:27.092 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:27.092 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.092 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3 00:18:27.092 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.092 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.092 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.092 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.092 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.092 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.351 00:18:27.351 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.351 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.351 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.918 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.918 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.918 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.918 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.918 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.918 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.918 { 00:18:27.918 "auth": { 00:18:27.918 "dhgroup": "ffdhe4096", 00:18:27.918 "digest": "sha256", 00:18:27.918 "state": "completed" 00:18:27.918 }, 00:18:27.918 "cntlid": 31, 00:18:27.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:27.918 "listen_address": { 00:18:27.918 "adrfam": "IPv4", 00:18:27.918 "traddr": "10.0.0.3", 00:18:27.918 "trsvcid": "4420", 00:18:27.918 "trtype": "TCP" 00:18:27.918 }, 00:18:27.918 "peer_address": { 00:18:27.918 "adrfam": "IPv4", 00:18:27.918 "traddr": 
"10.0.0.1", 00:18:27.918 "trsvcid": "50282", 00:18:27.918 "trtype": "TCP" 00:18:27.918 }, 00:18:27.918 "qid": 0, 00:18:27.918 "state": "enabled", 00:18:27.918 "thread": "nvmf_tgt_poll_group_000" 00:18:27.918 } 00:18:27.918 ]' 00:18:27.918 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.918 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.918 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.918 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:27.918 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.918 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.918 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.918 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.486 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:18:28.486 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:18:29.054 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.054 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:29.054 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.054 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.054 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.054 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.054 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.054 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:29.054 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:29.332 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:18:29.332 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.332 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:29.332 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:29.332 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:29.332 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.332 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.332 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.332 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.332 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.332 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.332 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.332 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.939 00:18:29.939 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.939 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.939 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.254 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.254 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.254 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.254 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.254 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.254 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.254 { 00:18:30.254 "auth": { 00:18:30.254 "dhgroup": "ffdhe6144", 00:18:30.254 "digest": "sha256", 00:18:30.254 "state": "completed" 00:18:30.254 }, 00:18:30.254 "cntlid": 33, 00:18:30.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:30.254 "listen_address": { 00:18:30.254 "adrfam": "IPv4", 00:18:30.254 "traddr": "10.0.0.3", 00:18:30.254 "trsvcid": "4420", 00:18:30.254 
"trtype": "TCP" 00:18:30.254 }, 00:18:30.254 "peer_address": { 00:18:30.254 "adrfam": "IPv4", 00:18:30.254 "traddr": "10.0.0.1", 00:18:30.254 "trsvcid": "50318", 00:18:30.254 "trtype": "TCP" 00:18:30.254 }, 00:18:30.254 "qid": 0, 00:18:30.254 "state": "enabled", 00:18:30.254 "thread": "nvmf_tgt_poll_group_000" 00:18:30.254 } 00:18:30.254 ]' 00:18:30.254 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.254 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:30.254 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.254 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:30.254 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.254 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.254 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.254 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.822 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:18:30.823 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:18:31.388 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.388 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:31.388 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.388 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.388 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.388 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.388 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:31.388 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:18:31.646 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:18:31.646 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.646 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:31.646 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:31.646 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:31.646 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.646 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.646 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.646 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.646 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.646 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.646 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.646 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.212 00:18:32.212 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.212 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.212 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.471 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.471 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.471 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.471 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.471 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.471 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.471 { 00:18:32.471 "auth": { 00:18:32.471 "dhgroup": "ffdhe6144", 00:18:32.471 "digest": "sha256", 00:18:32.471 "state": "completed" 00:18:32.471 }, 00:18:32.471 "cntlid": 35, 00:18:32.471 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:32.471 "listen_address": { 00:18:32.471 "adrfam": "IPv4", 00:18:32.471 "traddr": "10.0.0.3", 00:18:32.471 "trsvcid": "4420", 00:18:32.471 "trtype": "TCP" 00:18:32.471 }, 00:18:32.471 "peer_address": { 00:18:32.471 "adrfam": "IPv4", 00:18:32.471 "traddr": "10.0.0.1", 00:18:32.471 "trsvcid": "50350", 00:18:32.471 "trtype": "TCP" 00:18:32.471 }, 00:18:32.471 "qid": 0, 00:18:32.471 "state": "enabled", 00:18:32.471 "thread": "nvmf_tgt_poll_group_000" 00:18:32.471 } 00:18:32.471 ]' 00:18:32.471 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.471 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.471 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.471 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:32.471 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.730 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.730 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.730 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.989 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:18:32.989 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:18:33.556 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.556 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:33.556 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.556 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.556 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.556 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.556 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:33.556 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:33.814 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:18:33.814 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.814 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:33.814 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:33.814 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:33.814 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.814 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.814 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.814 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.814 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.814 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.814 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.814 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.381 00:18:34.381 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.381 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.381 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.639 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.639 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.639 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.639 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.639 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.639 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.639 { 00:18:34.639 "auth": { 00:18:34.639 "dhgroup": "ffdhe6144", 
00:18:34.639 "digest": "sha256", 00:18:34.639 "state": "completed" 00:18:34.639 }, 00:18:34.639 "cntlid": 37, 00:18:34.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:34.639 "listen_address": { 00:18:34.639 "adrfam": "IPv4", 00:18:34.639 "traddr": "10.0.0.3", 00:18:34.639 "trsvcid": "4420", 00:18:34.639 "trtype": "TCP" 00:18:34.639 }, 00:18:34.639 "peer_address": { 00:18:34.639 "adrfam": "IPv4", 00:18:34.639 "traddr": "10.0.0.1", 00:18:34.639 "trsvcid": "40316", 00:18:34.639 "trtype": "TCP" 00:18:34.639 }, 00:18:34.639 "qid": 0, 00:18:34.639 "state": "enabled", 00:18:34.639 "thread": "nvmf_tgt_poll_group_000" 00:18:34.639 } 00:18:34.639 ]' 00:18:34.639 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.639 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.639 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.897 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:34.897 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.897 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.897 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.897 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.156 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:18:35.156 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:18:35.722 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.722 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:35.722 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.722 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.722 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.722 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.722 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:18:35.722 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:36.290 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:36.290 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.290 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:36.290 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:36.290 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:36.290 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.290 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3 00:18:36.290 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.290 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.290 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.290 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:36.290 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.290 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.546 00:18:36.546 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.546 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.547 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.804 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.804 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.804 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.804 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.804 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.804 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.804 { 00:18:36.804 "auth": { 00:18:36.804 "dhgroup": 
"ffdhe6144", 00:18:36.804 "digest": "sha256", 00:18:36.804 "state": "completed" 00:18:36.804 }, 00:18:36.804 "cntlid": 39, 00:18:36.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:36.804 "listen_address": { 00:18:36.804 "adrfam": "IPv4", 00:18:36.804 "traddr": "10.0.0.3", 00:18:36.804 "trsvcid": "4420", 00:18:36.804 "trtype": "TCP" 00:18:36.804 }, 00:18:36.804 "peer_address": { 00:18:36.804 "adrfam": "IPv4", 00:18:36.804 "traddr": "10.0.0.1", 00:18:36.804 "trsvcid": "40354", 00:18:36.804 "trtype": "TCP" 00:18:36.804 }, 00:18:36.804 "qid": 0, 00:18:36.804 "state": "enabled", 00:18:36.804 "thread": "nvmf_tgt_poll_group_000" 00:18:36.804 } 00:18:36.804 ]' 00:18:37.062 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.062 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.062 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.062 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:37.062 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.062 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.062 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.062 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.320 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:18:37.320 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.338 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.272 00:18:39.272 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.272 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.272 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.529 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.529 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.529 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.529 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.529 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.529 13:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.529 { 00:18:39.529 "auth": { 00:18:39.529 "dhgroup": "ffdhe8192", 00:18:39.529 "digest": "sha256", 00:18:39.529 "state": "completed" 00:18:39.529 }, 00:18:39.529 "cntlid": 41, 00:18:39.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:39.529 "listen_address": { 00:18:39.529 "adrfam": "IPv4", 00:18:39.529 "traddr": "10.0.0.3", 00:18:39.529 "trsvcid": "4420", 00:18:39.529 "trtype": "TCP" 00:18:39.529 }, 00:18:39.529 "peer_address": { 00:18:39.529 "adrfam": "IPv4", 00:18:39.529 "traddr": "10.0.0.1", 00:18:39.529 "trsvcid": "40376", 00:18:39.529 "trtype": "TCP" 00:18:39.529 }, 00:18:39.529 "qid": 0, 00:18:39.529 "state": "enabled", 00:18:39.529 "thread": "nvmf_tgt_poll_group_000" 00:18:39.529 } 00:18:39.529 ]' 00:18:39.529 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.529 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.529 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.529 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:39.529 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.786 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.786 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.786 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.043 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:18:40.043 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:18:40.609 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.609 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:40.609 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.609 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.609 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.609 13:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.609 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:40.609 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:40.868 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:40.868 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.868 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:40.868 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:40.868 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:40.868 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.868 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.868 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.868 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.868 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.868 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.868 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.868 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.804 00:18:41.804 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.804 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.804 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.063 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.063 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.063 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.063 13:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.063 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.063 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.063 { 00:18:42.063 "auth": { 00:18:42.063 "dhgroup": "ffdhe8192", 00:18:42.063 "digest": "sha256", 00:18:42.063 "state": "completed" 00:18:42.063 }, 00:18:42.063 "cntlid": 43, 00:18:42.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:42.063 "listen_address": { 00:18:42.064 "adrfam": "IPv4", 00:18:42.064 "traddr": "10.0.0.3", 00:18:42.064 "trsvcid": "4420", 00:18:42.064 "trtype": "TCP" 00:18:42.064 }, 00:18:42.064 "peer_address": { 00:18:42.064 "adrfam": "IPv4", 00:18:42.064 "traddr": "10.0.0.1", 00:18:42.064 "trsvcid": "40400", 00:18:42.064 "trtype": "TCP" 00:18:42.064 }, 00:18:42.064 "qid": 0, 00:18:42.064 "state": "enabled", 00:18:42.064 "thread": "nvmf_tgt_poll_group_000" 00:18:42.064 } 00:18:42.064 ]' 00:18:42.064 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.064 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:42.064 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.064 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:42.064 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.323 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.323 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.323 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.582 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:18:42.582 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:18:43.148 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.148 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:43.148 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.148 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:43.148 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.148 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.148 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:43.148 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:43.406 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:43.406 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.406 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:43.406 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:43.406 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:43.406 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.406 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.406 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.406 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.406 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.406 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.406 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.406 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.973 00:18:44.231 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.231 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.231 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.489 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.489 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.489 13:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.489 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.489 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.489 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.489 { 00:18:44.489 "auth": { 00:18:44.489 "dhgroup": "ffdhe8192", 00:18:44.489 "digest": "sha256", 00:18:44.489 "state": "completed" 00:18:44.489 }, 00:18:44.489 "cntlid": 45, 00:18:44.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:44.489 "listen_address": { 00:18:44.489 "adrfam": "IPv4", 00:18:44.489 "traddr": "10.0.0.3", 00:18:44.489 "trsvcid": "4420", 00:18:44.489 "trtype": "TCP" 00:18:44.489 }, 00:18:44.489 "peer_address": { 00:18:44.489 "adrfam": "IPv4", 00:18:44.489 "traddr": "10.0.0.1", 00:18:44.489 "trsvcid": "60752", 00:18:44.489 "trtype": "TCP" 00:18:44.489 }, 00:18:44.489 "qid": 0, 00:18:44.489 "state": "enabled", 00:18:44.489 "thread": "nvmf_tgt_poll_group_000" 00:18:44.489 } 00:18:44.489 ]' 00:18:44.489 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.489 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:44.489 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.489 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:44.489 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.489 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.489 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.489 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.748 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:18:44.748 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:18:45.683 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.683 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:45.683 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:45.684 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.684 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.684 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.684 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:45.684 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:45.940 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:45.941 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.941 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:45.941 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:45.941 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:45.941 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.941 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3 00:18:45.941 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.941 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.941 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.941 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:45.941 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.941 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:46.539 00:18:46.539 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.539 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.539 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.796 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.796 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.796 
13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.796 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.796 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.796 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.796 { 00:18:46.796 "auth": { 00:18:46.796 "dhgroup": "ffdhe8192", 00:18:46.796 "digest": "sha256", 00:18:46.796 "state": "completed" 00:18:46.796 }, 00:18:46.796 "cntlid": 47, 00:18:46.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:46.796 "listen_address": { 00:18:46.796 "adrfam": "IPv4", 00:18:46.796 "traddr": "10.0.0.3", 00:18:46.796 "trsvcid": "4420", 00:18:46.796 "trtype": "TCP" 00:18:46.796 }, 00:18:46.796 "peer_address": { 00:18:46.796 "adrfam": "IPv4", 00:18:46.796 "traddr": "10.0.0.1", 00:18:46.796 "trsvcid": "60780", 00:18:46.796 "trtype": "TCP" 00:18:46.796 }, 00:18:46.796 "qid": 0, 00:18:46.796 "state": "enabled", 00:18:46.796 "thread": "nvmf_tgt_poll_group_000" 00:18:46.796 } 00:18:46.796 ]' 00:18:47.053 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.053 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.053 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.053 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:47.053 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.054 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.054 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.054 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.314 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:18:47.314 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.250 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.251 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.251 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.816 00:18:48.816 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.816 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.816 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.073 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.073 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.073 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.073 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.073 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.073 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.073 { 00:18:49.073 "auth": { 00:18:49.073 "dhgroup": "null", 00:18:49.073 "digest": "sha384", 00:18:49.073 "state": "completed" 00:18:49.073 }, 00:18:49.073 "cntlid": 49, 00:18:49.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:49.073 "listen_address": { 00:18:49.073 "adrfam": "IPv4", 00:18:49.073 "traddr": "10.0.0.3", 00:18:49.073 "trsvcid": "4420", 00:18:49.073 "trtype": "TCP" 00:18:49.073 }, 00:18:49.073 "peer_address": { 00:18:49.073 "adrfam": "IPv4", 00:18:49.073 "traddr": "10.0.0.1", 00:18:49.073 "trsvcid": "60814", 00:18:49.073 "trtype": "TCP" 00:18:49.073 }, 00:18:49.073 "qid": 0, 00:18:49.073 "state": "enabled", 00:18:49.073 "thread": "nvmf_tgt_poll_group_000" 00:18:49.073 } 00:18:49.073 ]' 00:18:49.073 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.073 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.073 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.073 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:49.073 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.073 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.073 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.073 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.639 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:18:49.639 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:18:50.205 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.205 13:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:50.205 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.205 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.205 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.205 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.205 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:50.205 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:50.463 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:50.463 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.463 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:50.463 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:50.463 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:50.463 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.463 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.463 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.463 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.463 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.463 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.463 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.463 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.721 00:18:50.978 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.978 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.978 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.236 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.236 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.236 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.236 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.236 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.236 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.236 { 00:18:51.236 "auth": { 00:18:51.236 "dhgroup": "null", 00:18:51.236 "digest": "sha384", 00:18:51.236 "state": "completed" 00:18:51.236 }, 00:18:51.236 "cntlid": 51, 00:18:51.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:51.236 "listen_address": { 00:18:51.236 "adrfam": "IPv4", 00:18:51.236 "traddr": "10.0.0.3", 00:18:51.236 "trsvcid": "4420", 00:18:51.236 "trtype": "TCP" 00:18:51.236 }, 00:18:51.236 "peer_address": { 00:18:51.236 "adrfam": "IPv4", 00:18:51.236 "traddr": "10.0.0.1", 00:18:51.236 "trsvcid": "60836", 00:18:51.236 "trtype": "TCP" 00:18:51.236 }, 00:18:51.236 "qid": 0, 00:18:51.236 "state": "enabled", 00:18:51.236 "thread": "nvmf_tgt_poll_group_000" 00:18:51.236 } 00:18:51.236 ]' 00:18:51.236 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.236 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.236 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.236 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:51.236 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.236 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.236 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.236 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.801 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:18:51.801 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:18:52.365 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.365 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.365 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:52.365 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.365 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.365 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.365 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.365 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:52.365 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:52.622 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:52.622 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.623 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:52.623 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:52.623 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:52.623 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.623 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.623 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.623 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.623 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.623 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.623 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.623 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.880 00:18:52.880 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.880 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:52.880 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.444 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.444 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.444 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.444 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.444 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.444 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.444 { 00:18:53.444 "auth": { 00:18:53.444 "dhgroup": "null", 00:18:53.444 "digest": "sha384", 00:18:53.444 "state": "completed" 00:18:53.444 }, 00:18:53.444 "cntlid": 53, 00:18:53.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:53.444 "listen_address": { 00:18:53.444 "adrfam": "IPv4", 00:18:53.444 "traddr": "10.0.0.3", 00:18:53.444 "trsvcid": "4420", 00:18:53.444 "trtype": "TCP" 00:18:53.444 }, 00:18:53.444 "peer_address": { 00:18:53.444 "adrfam": "IPv4", 00:18:53.444 "traddr": "10.0.0.1", 00:18:53.444 "trsvcid": "51096", 00:18:53.444 "trtype": "TCP" 00:18:53.444 }, 00:18:53.444 "qid": 0, 00:18:53.444 "state": "enabled", 00:18:53.444 "thread": "nvmf_tgt_poll_group_000" 00:18:53.444 } 00:18:53.444 ]' 00:18:53.444 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.444 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.444 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.444 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:53.444 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.444 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.444 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.444 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.702 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:18:53.702 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:18:54.348 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.619 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:54.619 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.619 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.619 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.619 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.619 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:54.619 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:54.876 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:54.876 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.876 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:54.876 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:54.876 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:54.876 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.876 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3 00:18:54.876 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.876 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.876 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.876 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:54.876 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.876 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:55.134 00:18:55.134 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.134 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:18:55.134 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.392 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.392 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.392 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.392 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.392 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.392 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.392 { 00:18:55.392 "auth": { 00:18:55.392 "dhgroup": "null", 00:18:55.392 "digest": "sha384", 00:18:55.392 "state": "completed" 00:18:55.392 }, 00:18:55.392 "cntlid": 55, 00:18:55.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:55.392 "listen_address": { 00:18:55.392 "adrfam": "IPv4", 00:18:55.392 "traddr": "10.0.0.3", 00:18:55.392 "trsvcid": "4420", 00:18:55.392 "trtype": "TCP" 00:18:55.392 }, 00:18:55.392 "peer_address": { 00:18:55.392 "adrfam": "IPv4", 00:18:55.392 "traddr": "10.0.0.1", 00:18:55.392 "trsvcid": "51122", 00:18:55.392 "trtype": "TCP" 00:18:55.392 }, 00:18:55.392 "qid": 0, 00:18:55.392 "state": "enabled", 00:18:55.392 "thread": "nvmf_tgt_poll_group_000" 00:18:55.392 } 00:18:55.392 ]' 00:18:55.392 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.392 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.392 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.649 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:55.649 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.649 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.649 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.649 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.908 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:18:55.908 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:18:56.842 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:18:56.842 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:56.842 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.842 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.842 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.842 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.842 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.842 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:56.842 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:57.100 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:57.100 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.100 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:57.100 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:57.100 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:57.100 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.100 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.100 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.100 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.100 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.100 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.100 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.100 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.358 00:18:57.358 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.358 13:13:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.358 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.616 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.616 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.616 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.616 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.616 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.616 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.616 { 00:18:57.616 "auth": { 00:18:57.616 "dhgroup": "ffdhe2048", 00:18:57.616 "digest": "sha384", 00:18:57.616 "state": "completed" 00:18:57.616 }, 00:18:57.616 "cntlid": 57, 00:18:57.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:57.616 "listen_address": { 00:18:57.616 "adrfam": "IPv4", 00:18:57.616 "traddr": "10.0.0.3", 00:18:57.616 "trsvcid": "4420", 00:18:57.616 "trtype": "TCP" 00:18:57.616 }, 00:18:57.616 "peer_address": { 00:18:57.616 "adrfam": "IPv4", 00:18:57.616 "traddr": "10.0.0.1", 00:18:57.616 "trsvcid": "51152", 00:18:57.616 "trtype": "TCP" 00:18:57.616 }, 00:18:57.616 "qid": 0, 00:18:57.616 "state": "enabled", 00:18:57.616 "thread": "nvmf_tgt_poll_group_000" 00:18:57.616 } 00:18:57.616 ]' 00:18:57.616 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.616 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.616 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.874 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:57.874 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.874 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.874 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.874 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.132 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:18:58.132 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: 
--dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:18:58.704 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.704 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:18:58.704 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.704 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.704 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.704 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.704 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:58.704 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:59.270 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:59.270 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.270 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:59.270 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:59.270 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:59.270 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.270 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.270 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.270 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.270 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.270 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.270 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.270 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.528 00:18:59.528 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.528 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.528 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.785 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.785 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.785 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.785 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.785 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.785 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.785 { 00:18:59.785 "auth": { 00:18:59.785 "dhgroup": "ffdhe2048", 00:18:59.785 "digest": "sha384", 00:18:59.785 "state": "completed" 00:18:59.785 }, 00:18:59.785 "cntlid": 59, 00:18:59.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:18:59.785 "listen_address": { 00:18:59.785 "adrfam": "IPv4", 00:18:59.785 "traddr": "10.0.0.3", 00:18:59.785 "trsvcid": "4420", 00:18:59.785 "trtype": "TCP" 00:18:59.785 }, 00:18:59.785 "peer_address": { 00:18:59.785 "adrfam": "IPv4", 00:18:59.785 "traddr": "10.0.0.1", 00:18:59.785 "trsvcid": "51180", 00:18:59.785 "trtype": "TCP" 00:18:59.785 }, 00:18:59.785 "qid": 0, 00:18:59.785 "state": "enabled", 00:18:59.785 "thread": "nvmf_tgt_poll_group_000" 00:18:59.785 } 00:18:59.785 ]' 00:18:59.785 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.785 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.785 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.785 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:59.785 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.043 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.043 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.043 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.301 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:19:00.301 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:19:01.235 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.235 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:19:01.235 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.235 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.235 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.235 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.235 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:01.235 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:01.493 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:19:01.493 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.493 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:01.493 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:01.493 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:01.493 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.493 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.493 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.493 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.493 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.493 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.493 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.493 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.751 00:19:01.751 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.751 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.751 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.009 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.009 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.009 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.009 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.009 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.009 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.009 { 00:19:02.009 "auth": { 00:19:02.009 "dhgroup": "ffdhe2048", 00:19:02.009 "digest": "sha384", 00:19:02.009 "state": "completed" 00:19:02.009 }, 00:19:02.009 "cntlid": 61, 00:19:02.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:19:02.009 "listen_address": { 00:19:02.009 "adrfam": "IPv4", 00:19:02.009 "traddr": "10.0.0.3", 00:19:02.009 "trsvcid": "4420", 00:19:02.009 "trtype": "TCP" 00:19:02.009 }, 00:19:02.009 "peer_address": { 00:19:02.009 "adrfam": "IPv4", 00:19:02.009 "traddr": "10.0.0.1", 00:19:02.009 "trsvcid": "51200", 00:19:02.009 "trtype": "TCP" 00:19:02.009 }, 00:19:02.009 "qid": 0, 00:19:02.009 "state": "enabled", 00:19:02.009 "thread": "nvmf_tgt_poll_group_000" 00:19:02.009 } 00:19:02.009 ]' 00:19:02.009 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.009 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.009 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.267 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:02.267 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.267 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.267 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.267 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.526 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:19:02.526 13:13:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:19:03.461 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.461 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:19:03.461 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.461 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.461 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.461 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.461 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:03.461 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:03.719 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:19:03.720 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.720 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:03.720 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:03.720 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:03.720 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.720 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3 00:19:03.720 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.720 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.720 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.720 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:03.720 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:03.720 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:03.978 00:19:03.978 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.978 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.978 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.237 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.237 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.237 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.237 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.237 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.237 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.237 { 00:19:04.237 "auth": { 00:19:04.237 "dhgroup": "ffdhe2048", 00:19:04.237 "digest": "sha384", 00:19:04.237 "state": "completed" 00:19:04.237 }, 00:19:04.237 "cntlid": 63, 00:19:04.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:19:04.237 "listen_address": { 00:19:04.237 "adrfam": "IPv4", 00:19:04.237 "traddr": "10.0.0.3", 00:19:04.237 "trsvcid": "4420", 00:19:04.237 "trtype": "TCP" 00:19:04.237 }, 00:19:04.237 "peer_address": { 00:19:04.237 "adrfam": "IPv4", 00:19:04.237 "traddr": "10.0.0.1", 00:19:04.237 "trsvcid": "43064", 00:19:04.237 "trtype": "TCP" 00:19:04.237 }, 00:19:04.237 "qid": 0, 00:19:04.237 "state": "enabled", 00:19:04.237 "thread": "nvmf_tgt_poll_group_000" 00:19:04.237 } 00:19:04.237 ]' 00:19:04.237 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.496 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.496 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.496 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:04.496 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.496 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.496 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.496 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.754 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:19:04.754 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:19:05.321 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.321 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:19:05.321 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.321 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.321 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.321 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.321 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.321 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:05.321 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:05.886 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:19:05.886 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.886 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:05.886 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:05.886 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:05.886 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.886 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.886 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.886 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.886 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.886 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.886 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:05.886 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.143 00:19:06.143 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.143 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.143 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.402 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.402 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.402 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.402 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.402 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.402 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.402 { 00:19:06.402 "auth": { 00:19:06.402 "dhgroup": "ffdhe3072", 00:19:06.402 "digest": "sha384", 00:19:06.402 "state": "completed" 00:19:06.402 }, 00:19:06.402 "cntlid": 65, 00:19:06.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:19:06.402 "listen_address": { 00:19:06.402 "adrfam": "IPv4", 00:19:06.402 "traddr": "10.0.0.3", 00:19:06.402 "trsvcid": "4420", 00:19:06.402 "trtype": "TCP" 00:19:06.402 }, 00:19:06.402 "peer_address": { 00:19:06.403 "adrfam": "IPv4", 00:19:06.403 "traddr": "10.0.0.1", 00:19:06.403 "trsvcid": "43098", 00:19:06.403 "trtype": "TCP" 00:19:06.403 }, 00:19:06.403 "qid": 0, 00:19:06.403 "state": "enabled", 00:19:06.403 "thread": "nvmf_tgt_poll_group_000" 00:19:06.403 } 00:19:06.403 ]' 00:19:06.403 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.660 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.660 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.660 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:06.660 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.660 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.660 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.660 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.918 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:19:06.918 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:19:07.853 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.853 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:19:07.853 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.853 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.853 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.853 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.853 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:07.853 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:08.111 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:19:08.111 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.111 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:08.111 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:08.111 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:08.111 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.111 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.111 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.111 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.111 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.111 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.111 13:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.111 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.369 00:19:08.369 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.370 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.370 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.627 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.627 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.627 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.627 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.627 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.627 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.627 { 00:19:08.627 "auth": { 00:19:08.627 "dhgroup": "ffdhe3072", 00:19:08.627 "digest": "sha384", 00:19:08.627 "state": "completed" 00:19:08.627 }, 00:19:08.627 "cntlid": 67, 00:19:08.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:19:08.627 "listen_address": { 00:19:08.627 "adrfam": "IPv4", 00:19:08.627 "traddr": "10.0.0.3", 00:19:08.627 "trsvcid": "4420", 00:19:08.627 "trtype": "TCP" 00:19:08.627 }, 00:19:08.627 "peer_address": { 00:19:08.627 "adrfam": "IPv4", 00:19:08.627 "traddr": "10.0.0.1", 00:19:08.627 "trsvcid": "43118", 00:19:08.627 "trtype": "TCP" 00:19:08.627 }, 00:19:08.627 "qid": 0, 00:19:08.627 "state": "enabled", 00:19:08.627 "thread": "nvmf_tgt_poll_group_000" 00:19:08.627 } 00:19:08.627 ]' 00:19:08.627 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.884 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.884 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.884 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:08.884 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.884 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.884 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.884 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.141 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:19:09.141 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:19:09.760 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.760 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:19:09.760 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.760 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.017 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.018 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.018 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:10.018 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:10.277 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:19:10.277 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.277 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:10.277 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:10.277 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:10.277 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.277 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.277 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.277 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.277 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.277 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.277 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.277 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.844 00:19:10.844 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.844 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.844 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.103 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.103 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.103 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.103 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.103 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.103 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.103 { 00:19:11.103 "auth": { 00:19:11.103 "dhgroup": "ffdhe3072", 00:19:11.103 "digest": "sha384", 00:19:11.103 "state": "completed" 00:19:11.103 }, 00:19:11.103 "cntlid": 69, 00:19:11.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:19:11.103 "listen_address": { 00:19:11.103 "adrfam": "IPv4", 00:19:11.103 "traddr": "10.0.0.3", 00:19:11.103 "trsvcid": "4420", 00:19:11.103 "trtype": "TCP" 00:19:11.103 }, 00:19:11.103 "peer_address": { 00:19:11.103 "adrfam": "IPv4", 00:19:11.103 "traddr": "10.0.0.1", 00:19:11.103 "trsvcid": "43146", 00:19:11.103 "trtype": "TCP" 00:19:11.103 }, 00:19:11.103 "qid": 0, 00:19:11.103 "state": "enabled", 00:19:11.103 "thread": "nvmf_tgt_poll_group_000" 00:19:11.103 } 00:19:11.103 ]' 00:19:11.103 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.103 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:11.103 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.103 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:11.103 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.103 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.103 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:11.103 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.669 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:19:11.669 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:19:12.235 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.235 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:19:12.235 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.235 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.235 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.235 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.235 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:12.235 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:12.494 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:12.494 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.494 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:12.494 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:12.494 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:12.494 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.494 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3 00:19:12.494 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.494 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.494 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.494 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:12.494 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:12.494 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:12.753 00:19:12.753 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.753 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.753 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.318 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.318 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.318 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.318 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.318 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.318 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.318 { 00:19:13.318 "auth": { 00:19:13.318 "dhgroup": "ffdhe3072", 00:19:13.318 "digest": "sha384", 00:19:13.318 "state": "completed" 00:19:13.318 }, 00:19:13.318 "cntlid": 71, 00:19:13.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:19:13.318 "listen_address": { 00:19:13.318 "adrfam": "IPv4", 00:19:13.318 "traddr": "10.0.0.3", 00:19:13.318 "trsvcid": "4420", 00:19:13.318 "trtype": "TCP" 00:19:13.318 }, 00:19:13.318 "peer_address": { 00:19:13.318 "adrfam": "IPv4", 00:19:13.318 "traddr": "10.0.0.1", 00:19:13.318 "trsvcid": "37732", 00:19:13.318 "trtype": "TCP" 00:19:13.318 }, 00:19:13.318 "qid": 0, 00:19:13.318 "state": "enabled", 00:19:13.318 "thread": "nvmf_tgt_poll_group_000" 00:19:13.318 } 00:19:13.318 ]' 00:19:13.318 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.318 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.318 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.318 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:13.318 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.318 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.318 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.318 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.577 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:19:13.577 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:19:14.512 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.512 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:19:14.512 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.512 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.512 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.512 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.512 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.512 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:14.512 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:14.772 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:14.772 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.772 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:14.772 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:14.772 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:14.772 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.772 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.772 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.772 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.772 13:13:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:14.772 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:14.772 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:14.772 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:15.031
00:19:15.031 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:15.031 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:15.031 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:15.290 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:15.290 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:15.290 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:15.290 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:15.548 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:15.548 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:15.548 {
00:19:15.548 "auth": {
00:19:15.548 "dhgroup": "ffdhe4096",
00:19:15.548 "digest": "sha384",
00:19:15.548 "state": "completed"
00:19:15.548 },
00:19:15.548 "cntlid": 73,
00:19:15.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:19:15.548 "listen_address": {
00:19:15.548 "adrfam": "IPv4",
00:19:15.548 "traddr": "10.0.0.3",
00:19:15.548 "trsvcid": "4420",
00:19:15.548 "trtype": "TCP"
00:19:15.548 },
00:19:15.548 "peer_address": {
00:19:15.548 "adrfam": "IPv4",
00:19:15.548 "traddr": "10.0.0.1",
00:19:15.548 "trsvcid": "37750",
00:19:15.548 "trtype": "TCP"
00:19:15.548 },
00:19:15.548 "qid": 0,
00:19:15.548 "state": "enabled",
00:19:15.548 "thread": "nvmf_tgt_poll_group_000"
00:19:15.548 }
00:19:15.548 ]'
00:19:15.548 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:15.548 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:15.548 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:15.548 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:15.548 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:15.548 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:15.548 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:15.548 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:15.807 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=:
00:19:15.807 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=:
00:19:16.757 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:16.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:16.757 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:19:16.757 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.757 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:16.757 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.757 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:16.757 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:16.757 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:17.022 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
00:19:17.022 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:17.022 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:17.022 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:17.022 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:17.022 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:17.022 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:17.022 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:17.022 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:17.022 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:17.022 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:17.022 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:17.022 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:17.281
00:19:17.281 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:17.281 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:17.281 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:17.848 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:17.848 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:17.848 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:17.848 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:17.848 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:17.848 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:17.848 {
00:19:17.848 "auth": {
00:19:17.848 "dhgroup": "ffdhe4096",
00:19:17.848 "digest": "sha384",
00:19:17.848 "state": "completed"
00:19:17.848 },
00:19:17.848 "cntlid": 75,
00:19:17.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:19:17.848 "listen_address": {
00:19:17.848 "adrfam": "IPv4",
00:19:17.848 "traddr": "10.0.0.3",
00:19:17.848 "trsvcid": "4420",
00:19:17.848 "trtype": "TCP"
00:19:17.848 },
00:19:17.848 "peer_address": {
00:19:17.848 "adrfam": "IPv4",
00:19:17.848 "traddr": "10.0.0.1",
00:19:17.848 "trsvcid": "37778",
00:19:17.848 "trtype": "TCP"
00:19:17.848 },
00:19:17.848 "qid": 0,
00:19:17.848 "state": "enabled",
00:19:17.848 "thread": "nvmf_tgt_poll_group_000"
00:19:17.848 }
00:19:17.848 ]'
00:19:17.848 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:17.848 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:17.848 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:17.848 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:17.848 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:17.848 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:17.848 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:17.848 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:18.107 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==:
00:19:18.107 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==:
00:19:19.055 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:19.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:19.055 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:19:19.055 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:19.055 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:19.055 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:19.055 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:19.055 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:19.055 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:19.314 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2
00:19:19.314 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:19.314 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:19.314 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:19.314 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:19.314 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:19.314 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:19.314 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:19.314 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:19.314 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:19.314 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:19.314 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:19.314 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:19.573
00:19:19.573 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:19.573 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:19.573 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:19.832 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:19.832 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:19.832 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:19.832 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:19.832 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:19.832 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:19.832 {
00:19:19.832 "auth": {
00:19:19.832 "dhgroup": "ffdhe4096",
00:19:19.832 "digest": "sha384",
00:19:19.832 "state": "completed"
00:19:19.832 },
00:19:19.832 "cntlid": 77,
00:19:19.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:19:19.832 "listen_address": {
00:19:19.832 "adrfam": "IPv4",
00:19:19.832 "traddr": "10.0.0.3",
00:19:19.832 "trsvcid": "4420",
00:19:19.832 "trtype": "TCP"
00:19:19.832 },
00:19:19.832 "peer_address": {
00:19:19.832 "adrfam": "IPv4",
00:19:19.832 "traddr": "10.0.0.1",
00:19:19.832 "trsvcid": "37802",
00:19:19.832 "trtype": "TCP"
00:19:19.832 },
00:19:19.832 "qid": 0,
00:19:19.832 "state": "enabled",
00:19:19.832 "thread": "nvmf_tgt_poll_group_000"
00:19:19.832 }
00:19:19.832 ]'
00:19:20.091 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:20.091 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:20.091 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:20.091 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:20.091 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:20.091 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:20.091 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:20.091 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:20.349 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu:
00:19:20.349 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu:
00:19:21.283 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:21.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:21.283 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:19:21.283 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:21.283 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:21.283 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:21.283 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:21.283 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:21.283 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:21.541 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:19:21.541 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:21.541 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:21.541 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:21.541 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:21.541 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:21.541 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3
00:19:21.541 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:21.541 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:21.541 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:21.541 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:21.541 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:21.541 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:22.107
00:19:22.107 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:22.107 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:22.107 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:22.365 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:22.365 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:22.365 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:22.365 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:22.365 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:22.365 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:22.365 {
00:19:22.365 "auth": {
00:19:22.365 "dhgroup": "ffdhe4096",
00:19:22.365 "digest": "sha384",
00:19:22.365 "state": "completed"
00:19:22.365 },
00:19:22.365 "cntlid": 79,
00:19:22.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:19:22.365 "listen_address": {
00:19:22.365 "adrfam": "IPv4",
00:19:22.365 "traddr": "10.0.0.3",
00:19:22.365 "trsvcid": "4420",
00:19:22.366 "trtype": "TCP"
00:19:22.366 },
00:19:22.366 "peer_address": {
00:19:22.366 "adrfam": "IPv4",
00:19:22.366 "traddr": "10.0.0.1",
00:19:22.366 "trsvcid": "37830",
00:19:22.366 "trtype": "TCP"
00:19:22.366 },
00:19:22.366 "qid": 0,
00:19:22.366 "state": "enabled",
00:19:22.366 "thread": "nvmf_tgt_poll_group_000"
00:19:22.366 }
00:19:22.366 ]'
00:19:22.366 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:22.366 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:22.366 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:22.366 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:22.366 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:22.366 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:22.366 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:22.366 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:22.624 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=:
00:19:22.624 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=:
00:19:23.555 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:23.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:23.555 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:19:23.555 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.555 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:23.555 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.555 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:23.555 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:23.555 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:19:23.555 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:19:23.812 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:19:23.812 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:23.812 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:23.812 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:19:23.812 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:23.812 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:23.812 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:23.812 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.812 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:23.813 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.813 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:23.813 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:23.813 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:24.474
00:19:24.474 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:24.474 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:24.474 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:24.731 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:24.731 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:24.731 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.731 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:24.731 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.731 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:24.731 {
00:19:24.731 "auth": {
00:19:24.731 "dhgroup": "ffdhe6144",
00:19:24.731 "digest": "sha384",
00:19:24.731 "state": "completed"
00:19:24.731 },
00:19:24.731 "cntlid": 81,
00:19:24.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:19:24.731 "listen_address": {
00:19:24.731 "adrfam": "IPv4",
00:19:24.731 "traddr": "10.0.0.3",
00:19:24.731 "trsvcid": "4420",
00:19:24.731 "trtype": "TCP"
00:19:24.731 },
00:19:24.731 "peer_address": {
00:19:24.731 "adrfam": "IPv4",
00:19:24.731 "traddr": "10.0.0.1",
00:19:24.731 "trsvcid": "33234",
00:19:24.731 "trtype": "TCP"
00:19:24.731 },
00:19:24.731 "qid": 0,
00:19:24.731 "state": "enabled",
00:19:24.731 "thread": "nvmf_tgt_poll_group_000"
00:19:24.731 }
00:19:24.731 ]'
00:19:24.731 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:24.731 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:24.731 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:24.731 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:24.731 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:24.731 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:24.731 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:24.731 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:24.989 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=:
00:19:24.989 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=:
00:19:25.922 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:25.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:25.922 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:19:25.922 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:25.922 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:25.922 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:25.922 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:25.922 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:19:25.922 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:19:26.180 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:19:26.180 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:26.180 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:26.180 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:19:26.180 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:26.180 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:26.180 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:26.180 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.180 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.180 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.180 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:26.180 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:26.180 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:26.747
00:19:26.747 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:26.747 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:26.747 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:27.005 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:27.006 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:27.006 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:27.006 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:27.006 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:27.006 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:27.006 {
00:19:27.006 "auth": {
00:19:27.006 "dhgroup": "ffdhe6144",
00:19:27.006 "digest": "sha384",
00:19:27.006 "state": "completed"
00:19:27.006 },
00:19:27.006 "cntlid": 83,
00:19:27.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:19:27.006 "listen_address": {
00:19:27.006 "adrfam": "IPv4",
00:19:27.006 "traddr": "10.0.0.3",
00:19:27.006 "trsvcid": "4420",
00:19:27.006 "trtype": "TCP"
00:19:27.006 },
00:19:27.006 "peer_address": {
00:19:27.006 "adrfam": "IPv4",
00:19:27.006 "traddr": "10.0.0.1",
00:19:27.006 "trsvcid": "33268",
00:19:27.006 "trtype": "TCP"
00:19:27.006 },
00:19:27.006 "qid": 0,
00:19:27.006 "state": "enabled",
00:19:27.006 "thread": "nvmf_tgt_poll_group_000"
00:19:27.006 }
00:19:27.006 ]'
00:19:27.006 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:27.006 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:27.006 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:27.006 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:27.006 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:27.006 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:27.006 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:27.006 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:27.264 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==:
00:19:27.264 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==:
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:28.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:28.200 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:28.765
00:19:28.765 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:28.765 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:28.765 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:29.023 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:29.023 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:29.023 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:29.023 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:29.023 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:29.023 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:29.023 {
00:19:29.023 "auth": {
00:19:29.023 "dhgroup": "ffdhe6144",
00:19:29.023 "digest": "sha384",
00:19:29.023 "state": "completed"
00:19:29.023 },
00:19:29.023 "cntlid": 85,
00:19:29.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:19:29.023 "listen_address": {
00:19:29.023 "adrfam": "IPv4",
00:19:29.023 "traddr": "10.0.0.3",
00:19:29.023 "trsvcid": "4420",
00:19:29.023 "trtype": "TCP"
00:19:29.023 },
00:19:29.023 "peer_address": {
00:19:29.023 "adrfam": "IPv4",
00:19:29.023 "traddr": "10.0.0.1",
00:19:29.023 "trsvcid": "33288",
00:19:29.023 "trtype": "TCP"
00:19:29.023 },
00:19:29.023 "qid": 0,
00:19:29.023 "state": "enabled",
00:19:29.023 "thread": "nvmf_tgt_poll_group_000"
00:19:29.023 }
00:19:29.023 ]'
00:19:29.281 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:29.281 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:29.282 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:29.282 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:29.282 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:29.282 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:29.282 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:29.282 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:29.541 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu:
00:19:29.541 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu:
00:19:30.108 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:30.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:30.108 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:19:30.108 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.108 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:30.108 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.108 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:30.108 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:19:30.108 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:19:30.675 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:19:30.675 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:30.675 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:30.675 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:19:30.675 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:30.675 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:30.675 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3
00:19:30.675 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.675 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:30.675 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.675 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:30.675 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:30.675 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:30.933
00:19:31.191 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:31.191 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:31.191 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:31.449 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:31.449 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:31.449 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.449 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:31.449 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.449 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:31.449 {
00:19:31.449 "auth": {
00:19:31.449 "dhgroup": "ffdhe6144",
00:19:31.449 "digest": "sha384",
00:19:31.449 "state": "completed"
00:19:31.449 },
00:19:31.449 "cntlid": 87,
00:19:31.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:19:31.449 "listen_address": {
00:19:31.449 "adrfam": "IPv4",
00:19:31.449 "traddr": "10.0.0.3",
00:19:31.449 "trsvcid": "4420",
00:19:31.449 "trtype": "TCP"
00:19:31.449 },
00:19:31.449 "peer_address": {
00:19:31.449 "adrfam": "IPv4",
00:19:31.449 "traddr": "10.0.0.1",
00:19:31.449 "trsvcid": "33324",
00:19:31.449 "trtype": "TCP"
00:19:31.449 },
00:19:31.449 "qid": 0,
00:19:31.449 "state": "enabled",
00:19:31.449 "thread": "nvmf_tgt_poll_group_000"
00:19:31.449 }
00:19:31.449 ]'
00:19:31.449 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:31.449 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:31.449 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:31.449 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:31.449 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:31.449 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:31.449 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:31.449 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:32.016 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=:
00:19:32.016 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=:
00:19:32.584 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:32.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:32.584 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:19:32.584 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.584 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:32.584 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.584 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:32.584 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:32.584 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:32.584 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:32.842 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:19:32.842 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:32.842 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:32.842 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:32.842 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:32.842 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:32.842 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:32.842 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.842 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:32.842 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.842 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:32.842 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:32.842 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:33.813
00:19:33.814 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:33.814 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:33.814 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:33.814 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:33.814 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:33.814 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:33.814 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:33.814 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:33.814 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:33.814 {
00:19:33.814 "auth": {
00:19:33.814 "dhgroup": "ffdhe8192",
00:19:33.814 "digest": "sha384",
00:19:33.814 "state": "completed"
00:19:33.814 },
00:19:33.814 "cntlid": 89,
00:19:33.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:19:33.814 "listen_address": {
00:19:33.814 "adrfam": "IPv4",
00:19:33.814 "traddr": "10.0.0.3",
00:19:33.814 "trsvcid": "4420",
00:19:33.814 "trtype": "TCP"
00:19:33.814 },
00:19:33.814 "peer_address": {
00:19:33.814 "adrfam": "IPv4",
00:19:33.814 "traddr": "10.0.0.1",
00:19:33.814 "trsvcid": "39962",
00:19:33.814 "trtype": "TCP"
00:19:33.814 },
00:19:33.814 "qid": 0,
00:19:33.814 "state": "enabled",
00:19:33.814 "thread": "nvmf_tgt_poll_group_000"
00:19:33.814 }
00:19:33.814 ]'
00:19:34.073 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:34.073 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:34.073 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:34.073 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:34.073 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:34.073 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:34.073 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:34.073 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:34.332 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=:
00:19:34.332 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=:
00:19:35.267 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:35.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:35.268 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:19:35.268 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:35.268 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:35.268 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:35.268 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:35.268 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:35.268 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:35.526 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:19:35.526 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:35.526 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:35.526 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:35.526 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:35.526 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:35.526 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:35.526 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:35.526 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:35.526 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:35.526 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:35.526 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:35.526 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:36.093
00:19:36.093 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:36.093 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:36.093 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:36.351 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:36.351 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:36.351 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:36.352 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:36.352 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:36.352 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:36.352 {
00:19:36.352 "auth": {
00:19:36.352 "dhgroup": "ffdhe8192",
00:19:36.352 "digest": "sha384",
00:19:36.352 "state": "completed"
00:19:36.352 },
00:19:36.352 "cntlid": 91,
00:19:36.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:19:36.352 "listen_address": {
00:19:36.352 "adrfam": "IPv4",
00:19:36.352 "traddr": "10.0.0.3",
00:19:36.352 "trsvcid": "4420",
00:19:36.352 "trtype": "TCP"
00:19:36.352 },
00:19:36.352 "peer_address": {
00:19:36.352 "adrfam": "IPv4",
00:19:36.352 "traddr": "10.0.0.1",
00:19:36.352 "trsvcid": "39980",
00:19:36.352 "trtype": "TCP"
00:19:36.352 },
00:19:36.352 "qid": 0,
00:19:36.352 "state": "enabled",
00:19:36.352 "thread": "nvmf_tgt_poll_group_000"
00:19:36.352 }
00:19:36.352 ]'
00:19:36.352 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:36.352 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:36.352 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:36.352 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:36.352 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:36.610 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:36.610 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:36.610 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:36.869 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==:
00:19:36.869 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==:
00:19:37.436 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:37.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:37.694 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:19:37.694 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:37.694 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:37.694 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:37.694 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:37.695 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:37.695 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
"nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:19:36.352 "listen_address": { 00:19:36.352 "adrfam": "IPv4", 00:19:36.352 "traddr": "10.0.0.3", 00:19:36.352 "trsvcid": "4420", 00:19:36.352 "trtype": "TCP" 00:19:36.352 }, 00:19:36.352 "peer_address": { 00:19:36.352 "adrfam": "IPv4", 00:19:36.352 "traddr": "10.0.0.1", 00:19:36.352 "trsvcid": "39980", 00:19:36.352 "trtype": "TCP" 00:19:36.352 }, 00:19:36.352 "qid": 0, 00:19:36.352 "state": "enabled", 00:19:36.352 "thread": "nvmf_tgt_poll_group_000" 00:19:36.352 } 00:19:36.352 ]' 00:19:36.352 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.352 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.352 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.352 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:36.352 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.610 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.610 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.610 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.869 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:19:36.869 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:19:37.436 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.694 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:19:37.694 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.694 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.694 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.694 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.694 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:37.695 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:37.957 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:19:37.957 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.957 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:37.957 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:37.957 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:37.957 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.957 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.957 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.957 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.957 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.957 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.957 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.958 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.525 00:19:38.525 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.525 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.525 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.106 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.106 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.106 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.106 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.106 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.106 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.106 { 00:19:39.106 "auth": { 00:19:39.106 "dhgroup": "ffdhe8192", 
00:19:39.106 "digest": "sha384", 00:19:39.106 "state": "completed" 00:19:39.106 }, 00:19:39.106 "cntlid": 93, 00:19:39.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:19:39.106 "listen_address": { 00:19:39.106 "adrfam": "IPv4", 00:19:39.106 "traddr": "10.0.0.3", 00:19:39.106 "trsvcid": "4420", 00:19:39.106 "trtype": "TCP" 00:19:39.106 }, 00:19:39.106 "peer_address": { 00:19:39.106 "adrfam": "IPv4", 00:19:39.106 "traddr": "10.0.0.1", 00:19:39.106 "trsvcid": "40018", 00:19:39.106 "trtype": "TCP" 00:19:39.106 }, 00:19:39.106 "qid": 0, 00:19:39.106 "state": "enabled", 00:19:39.106 "thread": "nvmf_tgt_poll_group_000" 00:19:39.106 } 00:19:39.106 ]' 00:19:39.106 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.106 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.106 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.106 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:39.106 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.106 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.106 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.106 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.363 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:19:39.363 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:19:40.295 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.295 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:19:40.295 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.295 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.295 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.295 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.295 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:19:40.295 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:40.553 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:19:40.553 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.553 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:40.553 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:40.553 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:40.553 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.553 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3 00:19:40.553 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.553 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.553 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.553 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:40.553 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.553 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.117 00:19:41.117 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.117 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.117 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.375 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.375 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.375 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.375 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.375 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.375 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.375 { 00:19:41.375 "auth": { 00:19:41.375 "dhgroup": 
"ffdhe8192", 00:19:41.375 "digest": "sha384", 00:19:41.375 "state": "completed" 00:19:41.375 }, 00:19:41.375 "cntlid": 95, 00:19:41.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:19:41.375 "listen_address": { 00:19:41.375 "adrfam": "IPv4", 00:19:41.375 "traddr": "10.0.0.3", 00:19:41.375 "trsvcid": "4420", 00:19:41.375 "trtype": "TCP" 00:19:41.375 }, 00:19:41.375 "peer_address": { 00:19:41.375 "adrfam": "IPv4", 00:19:41.375 "traddr": "10.0.0.1", 00:19:41.375 "trsvcid": "40048", 00:19:41.375 "trtype": "TCP" 00:19:41.375 }, 00:19:41.375 "qid": 0, 00:19:41.375 "state": "enabled", 00:19:41.375 "thread": "nvmf_tgt_poll_group_000" 00:19:41.375 } 00:19:41.375 ]' 00:19:41.375 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.375 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.375 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.631 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:41.631 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.631 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.631 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.631 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.889 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:19:41.889 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:19:42.507 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.507 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:19:42.507 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.507 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.507 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.507 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:42.507 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.507 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.507 
13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:42.507 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:42.765 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:19:42.765 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.765 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:42.765 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:42.765 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:42.765 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.766 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.766 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.766 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.766 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.766 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.766 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.766 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.331 00:19:43.331 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.331 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.331 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.588 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.588 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.588 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.588 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.588 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.588 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.588 { 00:19:43.588 "auth": { 00:19:43.588 "dhgroup": "null", 00:19:43.588 "digest": "sha512", 00:19:43.588 "state": "completed" 00:19:43.588 }, 00:19:43.588 "cntlid": 97, 00:19:43.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:19:43.588 "listen_address": { 00:19:43.588 "adrfam": "IPv4", 00:19:43.588 "traddr": "10.0.0.3", 00:19:43.588 "trsvcid": "4420", 00:19:43.588 "trtype": "TCP" 00:19:43.588 }, 00:19:43.588 "peer_address": { 00:19:43.588 "adrfam": "IPv4", 00:19:43.588 "traddr": "10.0.0.1", 00:19:43.588 "trsvcid": "39566", 00:19:43.588 "trtype": "TCP" 00:19:43.588 }, 00:19:43.588 "qid": 0, 00:19:43.588 "state": "enabled", 00:19:43.588 "thread": "nvmf_tgt_poll_group_000" 00:19:43.588 } 00:19:43.588 ]' 00:19:43.588 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.588 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.588 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.588 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:43.588 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.846 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.846 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.846 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.104 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:19:44.104 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:19:44.670 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.670 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:19:44.670 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.670 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.670 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
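The comparisons of the form [[ null == \n\u\l\l ]] above are only xtrace's way of printing the literal right-hand pattern with every character escaped; semantically each round asserts three fields of the first qpair, as at auth.sh@74-77. Condensed:

# qpair verification as in auth.sh@74-77 (condensed sketch).
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]  # e.g. sha512
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]  # e.g. null
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]  # handshake done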
-- # [[ 0 == 0 ]] 00:19:44.670 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.670 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:44.670 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:44.928 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:19:44.928 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.928 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:44.928 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:44.928 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:44.928 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.928 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.928 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.928 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.928 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.928 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.928 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.928 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.494 00:19:45.494 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.494 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.494 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.752 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.752 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.752 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.752 13:13:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.752 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.752 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.752 { 00:19:45.752 "auth": { 00:19:45.752 "dhgroup": "null", 00:19:45.752 "digest": "sha512", 00:19:45.752 "state": "completed" 00:19:45.752 }, 00:19:45.752 "cntlid": 99, 00:19:45.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:19:45.752 "listen_address": { 00:19:45.752 "adrfam": "IPv4", 00:19:45.752 "traddr": "10.0.0.3", 00:19:45.752 "trsvcid": "4420", 00:19:45.752 "trtype": "TCP" 00:19:45.752 }, 00:19:45.752 "peer_address": { 00:19:45.752 "adrfam": "IPv4", 00:19:45.752 "traddr": "10.0.0.1", 00:19:45.752 "trsvcid": "39594", 00:19:45.753 "trtype": "TCP" 00:19:45.753 }, 00:19:45.753 "qid": 0, 00:19:45.753 "state": "enabled", 00:19:45.753 "thread": "nvmf_tgt_poll_group_000" 00:19:45.753 } 00:19:45.753 ]' 00:19:45.753 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.753 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.753 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.753 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:46.011 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.011 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.011 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.011 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.270 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:19:46.270 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:19:46.838 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.838 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:19:46.838 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.838 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.838 13:13:53 
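The kernel-initiator leg of each round mirrors the RPC leg; the extra nvme-cli flags keep the test small and fast. A sketch of the connect/disconnect pair, with the DHHC-1 secrets from the trace bound to stand-in variables:

# nvme connect leg as traced (sketch; $key/$ckey stand in for the DHHC-1 secrets).
# -i 1 requests a single I/O queue pair, and -l 0 sets ctrl-loss-tmo to 0 so a
# failed authentication surfaces immediately instead of being retried.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
     -q "$hostnqn" --hostid "${hostnqn##*:}" -l 0 \
     --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# Note: in this trace the hostid is the UUID portion of the host NQN, which is
# what the ${hostnqn##*:} expansion extracts.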
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.838 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.838 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:46.838 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:47.096 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:19:47.096 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.096 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:47.096 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:47.096 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:47.096 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.097 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.097 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.097 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.356 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.356 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.356 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.356 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.614 00:19:47.614 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.614 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.614 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.874 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.874 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.874 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.874 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.874 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.874 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.874 { 00:19:47.874 "auth": { 00:19:47.874 "dhgroup": "null", 00:19:47.874 "digest": "sha512", 00:19:47.874 "state": "completed" 00:19:47.874 }, 00:19:47.874 "cntlid": 101, 00:19:47.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:19:47.874 "listen_address": { 00:19:47.874 "adrfam": "IPv4", 00:19:47.874 "traddr": "10.0.0.3", 00:19:47.874 "trsvcid": "4420", 00:19:47.874 "trtype": "TCP" 00:19:47.874 }, 00:19:47.874 "peer_address": { 00:19:47.874 "adrfam": "IPv4", 00:19:47.874 "traddr": "10.0.0.1", 00:19:47.874 "trsvcid": "39606", 00:19:47.874 "trtype": "TCP" 00:19:47.874 }, 00:19:47.874 "qid": 0, 00:19:47.874 "state": "enabled", 00:19:47.874 "thread": "nvmf_tgt_poll_group_000" 00:19:47.874 } 00:19:47.874 ]' 00:19:47.874 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.874 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.874 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.133 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:48.134 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.134 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.134 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.134 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.393 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:19:48.393 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:19:48.960 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.960 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:19:48.960 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.960 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
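The key3 round that follows (like its sha384 counterpart earlier) calls nvmf_subsystem_add_host with --dhchap-key key3 only. That is the effect of auth.sh@68, which builds the controller-key argument with bash's :+ expansion so it vanishes when no ckey is defined for that index:

# auth.sh@68 as traced: the ctrlr-key argument only materializes when
# ckeys[keyid] is non-empty (that ckeys[3] is empty is inferred from the
# key3 rounds in this log, which add the host without a ctrlr key).
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"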
-- common/autotest_common.sh@10 -- # set +x 00:19:48.960 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.960 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.960 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:48.960 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:49.525 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:19:49.525 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.525 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:49.525 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:49.525 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:49.525 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.525 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3 00:19:49.525 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.525 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.525 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.525 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:49.525 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.525 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.783 00:19:49.783 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.783 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.783 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.041 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.041 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.041 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:50.041 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.041 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.041 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.041 { 00:19:50.041 "auth": { 00:19:50.041 "dhgroup": "null", 00:19:50.041 "digest": "sha512", 00:19:50.041 "state": "completed" 00:19:50.041 }, 00:19:50.041 "cntlid": 103, 00:19:50.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:19:50.041 "listen_address": { 00:19:50.041 "adrfam": "IPv4", 00:19:50.041 "traddr": "10.0.0.3", 00:19:50.041 "trsvcid": "4420", 00:19:50.041 "trtype": "TCP" 00:19:50.041 }, 00:19:50.041 "peer_address": { 00:19:50.041 "adrfam": "IPv4", 00:19:50.041 "traddr": "10.0.0.1", 00:19:50.041 "trsvcid": "39638", 00:19:50.041 "trtype": "TCP" 00:19:50.041 }, 00:19:50.041 "qid": 0, 00:19:50.041 "state": "enabled", 00:19:50.041 "thread": "nvmf_tgt_poll_group_000" 00:19:50.041 } 00:19:50.041 ]' 00:19:50.041 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.041 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.041 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.041 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:50.041 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.301 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.301 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.301 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.561 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:19:50.561 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:19:51.128 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.459 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:19:51.459 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.459 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.459 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:51.459 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.459 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.459 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:51.459 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:51.459 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:19:51.459 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.459 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:51.459 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:51.459 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:51.459 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.459 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.459 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.459 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.718 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.718 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.718 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.718 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.977 00:19:51.977 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.977 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.977 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.235 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.235 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.235 
13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.235 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.235 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.235 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.235 { 00:19:52.235 "auth": { 00:19:52.235 "dhgroup": "ffdhe2048", 00:19:52.235 "digest": "sha512", 00:19:52.235 "state": "completed" 00:19:52.235 }, 00:19:52.235 "cntlid": 105, 00:19:52.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:19:52.235 "listen_address": { 00:19:52.235 "adrfam": "IPv4", 00:19:52.235 "traddr": "10.0.0.3", 00:19:52.235 "trsvcid": "4420", 00:19:52.235 "trtype": "TCP" 00:19:52.235 }, 00:19:52.235 "peer_address": { 00:19:52.235 "adrfam": "IPv4", 00:19:52.235 "traddr": "10.0.0.1", 00:19:52.235 "trsvcid": "39662", 00:19:52.235 "trtype": "TCP" 00:19:52.235 }, 00:19:52.235 "qid": 0, 00:19:52.235 "state": "enabled", 00:19:52.235 "thread": "nvmf_tgt_poll_group_000" 00:19:52.235 } 00:19:52.235 ]' 00:19:52.236 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.236 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.236 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.236 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:52.236 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.236 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.236 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.236 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.803 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:19:52.803 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:19:53.370 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.370 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:19:53.370 13:13:59 
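The DHHC-1 strings passed around above are the standard NVMe-oF DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64 secret + CRC>:, where t = 00, 01, 02 or 03 selects no transform, SHA-256, SHA-384 or SHA-512 respectively; that is why key0 secrets here start with DHHC-1:00: and key3 secrets with DHHC-1:03:. Recent nvme-cli builds can mint such secrets; the exact flags below are an assumption to check against your nvme-cli version:

# Generating a DHHC-1 secret (hedged sketch; confirm the flags with
# `nvme gen-dhchap-key --help` on your build). -m selects the HMAC
# (here 3 = SHA-512), -l the secret length in bytes, and -n binds the
# transformed secret to the host NQN.
nvme gen-dhchap-key -m 3 -l 48 -n "$hostnqn"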
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.370 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.370 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.370 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.370 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:53.370 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:53.629 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:19:53.629 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.629 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:53.629 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:53.629 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:53.629 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.629 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.629 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.629 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.629 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.629 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.629 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.629 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.888 00:19:54.146 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.146 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.146 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.404 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:19:54.404 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.404 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.405 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.405 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.405 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.405 { 00:19:54.405 "auth": { 00:19:54.405 "dhgroup": "ffdhe2048", 00:19:54.405 "digest": "sha512", 00:19:54.405 "state": "completed" 00:19:54.405 }, 00:19:54.405 "cntlid": 107, 00:19:54.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:19:54.405 "listen_address": { 00:19:54.405 "adrfam": "IPv4", 00:19:54.405 "traddr": "10.0.0.3", 00:19:54.405 "trsvcid": "4420", 00:19:54.405 "trtype": "TCP" 00:19:54.405 }, 00:19:54.405 "peer_address": { 00:19:54.405 "adrfam": "IPv4", 00:19:54.405 "traddr": "10.0.0.1", 00:19:54.405 "trsvcid": "53602", 00:19:54.405 "trtype": "TCP" 00:19:54.405 }, 00:19:54.405 "qid": 0, 00:19:54.405 "state": "enabled", 00:19:54.405 "thread": "nvmf_tgt_poll_group_000" 00:19:54.405 } 00:19:54.405 ]' 00:19:54.405 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.405 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.405 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.405 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:54.405 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.405 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.405 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.405 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.664 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:19:54.664 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:19:55.600 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.600 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:19:55.600 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.600 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.600 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.600 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.600 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:55.600 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:55.859 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:19:55.859 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.859 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:55.859 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:55.859 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:55.859 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.859 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.859 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.859 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.859 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.859 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.859 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.859 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.117 00:19:56.117 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.117 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.117 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.376 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.376 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.376 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.376 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.376 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.376 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.376 { 00:19:56.376 "auth": { 00:19:56.376 "dhgroup": "ffdhe2048", 00:19:56.376 "digest": "sha512", 00:19:56.376 "state": "completed" 00:19:56.376 }, 00:19:56.376 "cntlid": 109, 00:19:56.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:19:56.376 "listen_address": { 00:19:56.376 "adrfam": "IPv4", 00:19:56.376 "traddr": "10.0.0.3", 00:19:56.376 "trsvcid": "4420", 00:19:56.376 "trtype": "TCP" 00:19:56.376 }, 00:19:56.376 "peer_address": { 00:19:56.376 "adrfam": "IPv4", 00:19:56.376 "traddr": "10.0.0.1", 00:19:56.376 "trsvcid": "53634", 00:19:56.376 "trtype": "TCP" 00:19:56.376 }, 00:19:56.376 "qid": 0, 00:19:56.376 "state": "enabled", 00:19:56.376 "thread": "nvmf_tgt_poll_group_000" 00:19:56.376 } 00:19:56.376 ]' 00:19:56.376 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.635 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.635 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.635 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:56.635 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.635 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.635 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.635 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.893 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:19:56.893 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:19:57.828 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
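
The rounds traced above each follow the same shape: restrict the host's DH-HMAC-CHAP options to one digest/dhgroup pair, register the host NQN on the subsystem with a key (and optionally a controller key), attach a controller so authentication actually runs, read the negotiated parameters back off the queue pair, then detach and remove the host before the next combination. A minimal sketch of one such round, condensed strictly from the commands this log runs (socket paths, addresses, NQNs, and flags are the ones shown in the trace; target-side calls use the default RPC socket and host-side calls use /var/tmp/host.sock, mirroring the test's rpc_cmd and hostrpc wrappers):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host side: allow only this digest/dhgroup combination for the round.
    $rpc -s $hostsock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Target side: admit the host with a specific DH-HMAC-CHAP key pair.
    $rpc nvmf_subsystem_add_host $subnqn "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach a controller; authentication happens here.
    $rpc -s $hostsock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n $subnqn -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Target side: confirm the qpair negotiated the expected parameters.
    $rpc nvmf_subsystem_get_qpairs $subnqn \
        | jq -r '.[0].auth | .digest, .dhgroup, .state'

    # Tear down before the next digest/dhgroup/key combination.
    $rpc -s $hostsock bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host $subnqn "$hostnqn"
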
00:19:57.828 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:19:57.828 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.828 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.828 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.828 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.828 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:57.829 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:57.829 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:57.829 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.829 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:57.829 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:57.829 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:57.829 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.829 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3 00:19:57.829 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.829 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.829 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.829 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:57.829 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.829 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.395 00:19:58.395 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.395 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.395 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.654 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.654 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.654 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.654 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.654 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.654 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.654 { 00:19:58.654 "auth": { 00:19:58.654 "dhgroup": "ffdhe2048", 00:19:58.654 "digest": "sha512", 00:19:58.654 "state": "completed" 00:19:58.654 }, 00:19:58.654 "cntlid": 111, 00:19:58.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:19:58.654 "listen_address": { 00:19:58.654 "adrfam": "IPv4", 00:19:58.654 "traddr": "10.0.0.3", 00:19:58.654 "trsvcid": "4420", 00:19:58.654 "trtype": "TCP" 00:19:58.654 }, 00:19:58.654 "peer_address": { 00:19:58.654 "adrfam": "IPv4", 00:19:58.654 "traddr": "10.0.0.1", 00:19:58.654 "trsvcid": "53674", 00:19:58.654 "trtype": "TCP" 00:19:58.654 }, 00:19:58.654 "qid": 0, 00:19:58.654 "state": "enabled", 00:19:58.654 "thread": "nvmf_tgt_poll_group_000" 00:19:58.654 } 00:19:58.654 ]' 00:19:58.654 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.654 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.654 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.654 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.654 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.654 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.654 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.654 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.220 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:19:59.220 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:19:59.796 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.796 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:19:59.796 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.796 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.796 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.796 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.796 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.796 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:59.796 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:00.055 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:00.055 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.055 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:00.055 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:00.055 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:00.055 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.055 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.055 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.055 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.055 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.055 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.055 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.055 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.623 00:20:00.623 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.623 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.623 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.881 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.881 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.881 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.881 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.881 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.881 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.881 { 00:20:00.881 "auth": { 00:20:00.881 "dhgroup": "ffdhe3072", 00:20:00.881 "digest": "sha512", 00:20:00.881 "state": "completed" 00:20:00.881 }, 00:20:00.881 "cntlid": 113, 00:20:00.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:20:00.881 "listen_address": { 00:20:00.881 "adrfam": "IPv4", 00:20:00.881 "traddr": "10.0.0.3", 00:20:00.881 "trsvcid": "4420", 00:20:00.881 "trtype": "TCP" 00:20:00.881 }, 00:20:00.881 "peer_address": { 00:20:00.881 "adrfam": "IPv4", 00:20:00.881 "traddr": "10.0.0.1", 00:20:00.881 "trsvcid": "53706", 00:20:00.881 "trtype": "TCP" 00:20:00.881 }, 00:20:00.881 "qid": 0, 00:20:00.881 "state": "enabled", 00:20:00.881 "thread": "nvmf_tgt_poll_group_000" 00:20:00.881 } 00:20:00.881 ]' 00:20:00.881 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.881 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.881 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.881 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:00.881 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.881 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.881 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.881 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.447 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:20:01.447 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret 
DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:20:02.012 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.012 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:20:02.012 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.012 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.012 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.012 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.012 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:02.012 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:02.270 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:02.270 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.270 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:02.270 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:02.270 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:02.270 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.271 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.271 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.271 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.271 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.271 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.271 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.271 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.837 00:20:02.837 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.837 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.837 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.094 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.094 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.094 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.094 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.094 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.094 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.094 { 00:20:03.094 "auth": { 00:20:03.094 "dhgroup": "ffdhe3072", 00:20:03.094 "digest": "sha512", 00:20:03.094 "state": "completed" 00:20:03.094 }, 00:20:03.094 "cntlid": 115, 00:20:03.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:20:03.094 "listen_address": { 00:20:03.094 "adrfam": "IPv4", 00:20:03.094 "traddr": "10.0.0.3", 00:20:03.094 "trsvcid": "4420", 00:20:03.094 "trtype": "TCP" 00:20:03.094 }, 00:20:03.094 "peer_address": { 00:20:03.094 "adrfam": "IPv4", 00:20:03.094 "traddr": "10.0.0.1", 00:20:03.094 "trsvcid": "46152", 00:20:03.094 "trtype": "TCP" 00:20:03.094 }, 00:20:03.094 "qid": 0, 00:20:03.094 "state": "enabled", 00:20:03.094 "thread": "nvmf_tgt_poll_group_000" 00:20:03.094 } 00:20:03.094 ]' 00:20:03.094 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.094 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.094 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.352 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:03.352 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.352 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.352 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.352 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.609 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:20:03.609 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 
78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==: 00:20:04.175 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.175 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:20:04.175 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.175 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.433 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.433 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.433 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:04.433 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:04.691 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:20:04.691 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.691 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:04.691 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:04.691 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:04.691 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.691 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.691 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.691 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.691 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.691 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.691 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.691 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.950 00:20:04.950 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.950 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.950 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.516 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.516 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.516 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.516 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.516 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.516 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.516 { 00:20:05.516 "auth": { 00:20:05.516 "dhgroup": "ffdhe3072", 00:20:05.516 "digest": "sha512", 00:20:05.516 "state": "completed" 00:20:05.516 }, 00:20:05.516 "cntlid": 117, 00:20:05.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:20:05.516 "listen_address": { 00:20:05.516 "adrfam": "IPv4", 00:20:05.516 "traddr": "10.0.0.3", 00:20:05.516 "trsvcid": "4420", 00:20:05.516 "trtype": "TCP" 00:20:05.516 }, 00:20:05.516 "peer_address": { 00:20:05.516 "adrfam": "IPv4", 00:20:05.516 "traddr": "10.0.0.1", 00:20:05.516 "trsvcid": "46180", 00:20:05.516 "trtype": "TCP" 00:20:05.516 }, 00:20:05.516 "qid": 0, 00:20:05.516 "state": "enabled", 00:20:05.516 "thread": "nvmf_tgt_poll_group_000" 00:20:05.516 } 00:20:05.516 ]' 00:20:05.516 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.516 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:05.516 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.516 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:05.516 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.516 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.516 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.516 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.775 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:20:05.775 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu: 00:20:06.712 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.712 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:20:06.712 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.712 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.712 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.712 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.712 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:06.713 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:06.971 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:06.971 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.971 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:06.971 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:06.971 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:06.972 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.972 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3 00:20:06.972 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.972 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.972 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.972 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:06.972 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:06.972 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.538 00:20:07.538 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.538 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.538 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.797 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.797 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.797 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.797 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.797 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.797 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.797 { 00:20:07.797 "auth": { 00:20:07.797 "dhgroup": "ffdhe3072", 00:20:07.797 "digest": "sha512", 00:20:07.797 "state": "completed" 00:20:07.797 }, 00:20:07.797 "cntlid": 119, 00:20:07.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:20:07.797 "listen_address": { 00:20:07.797 "adrfam": "IPv4", 00:20:07.797 "traddr": "10.0.0.3", 00:20:07.797 "trsvcid": "4420", 00:20:07.797 "trtype": "TCP" 00:20:07.797 }, 00:20:07.797 "peer_address": { 00:20:07.797 "adrfam": "IPv4", 00:20:07.797 "traddr": "10.0.0.1", 00:20:07.797 "trsvcid": "46214", 00:20:07.797 "trtype": "TCP" 00:20:07.797 }, 00:20:07.797 "qid": 0, 00:20:07.797 "state": "enabled", 00:20:07.797 "thread": "nvmf_tgt_poll_group_000" 00:20:07.797 } 00:20:07.797 ]' 00:20:07.797 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.797 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:07.797 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.797 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:07.797 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.797 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.797 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.797 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.365 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:20:08.365 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:20:08.933 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.933 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:20:08.933 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.933 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.933 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.933 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.933 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.934 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:08.934 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:09.190 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:09.190 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.190 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:09.190 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:09.190 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:09.190 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.190 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.190 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.190 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.447 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.447 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.447 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.447 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.706 00:20:09.706 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.706 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.706 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.965 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.965 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.965 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.965 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.965 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.965 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.965 { 00:20:09.965 "auth": { 00:20:09.965 "dhgroup": "ffdhe4096", 00:20:09.965 "digest": "sha512", 00:20:09.965 "state": "completed" 00:20:09.965 }, 00:20:09.965 "cntlid": 121, 00:20:09.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:20:09.965 "listen_address": { 00:20:09.965 "adrfam": "IPv4", 00:20:09.965 "traddr": "10.0.0.3", 00:20:09.965 "trsvcid": "4420", 00:20:09.965 "trtype": "TCP" 00:20:09.965 }, 00:20:09.965 "peer_address": { 00:20:09.965 "adrfam": "IPv4", 00:20:09.965 "traddr": "10.0.0.1", 00:20:09.965 "trsvcid": "46234", 00:20:09.965 "trtype": "TCP" 00:20:09.965 }, 00:20:09.965 "qid": 0, 00:20:09.965 "state": "enabled", 00:20:09.965 "thread": "nvmf_tgt_poll_group_000" 00:20:09.965 } 00:20:09.965 ]' 00:20:09.965 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.965 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:10.225 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.225 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:10.225 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.225 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.225 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.225 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.486 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret 
DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=:
00:20:10.486 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=:
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:11.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:11.435 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:11.702 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:11.969
00:20:11.969 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:11.969 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:11.969 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:12.230 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:12.230 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:12.230 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:12.230 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:12.230 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:12.230 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:12.230 {
00:20:12.230 "auth": {
00:20:12.230 "dhgroup": "ffdhe4096",
00:20:12.230 "digest": "sha512",
00:20:12.230 "state": "completed"
00:20:12.230 },
00:20:12.230 "cntlid": 123,
00:20:12.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:20:12.230 "listen_address": {
00:20:12.230 "adrfam": "IPv4",
00:20:12.230 "traddr": "10.0.0.3",
00:20:12.230 "trsvcid": "4420",
00:20:12.230 "trtype": "TCP"
00:20:12.230 },
00:20:12.230 "peer_address": {
00:20:12.230 "adrfam": "IPv4",
00:20:12.230 "traddr": "10.0.0.1",
00:20:12.230 "trsvcid": "46248",
00:20:12.230 "trtype": "TCP"
00:20:12.230 },
00:20:12.230 "qid": 0,
00:20:12.230 "state": "enabled",
00:20:12.230 "thread": "nvmf_tgt_poll_group_000"
00:20:12.230 }
00:20:12.230 ]'
00:20:12.230 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:12.488 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:12.488 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:12.488 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:12.488 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:12.488 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:12.488 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:12.488 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:12.833 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==:
00:20:12.833 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==:
00:20:13.769 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:13.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:13.769 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:20:13.769 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:13.769 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:13.769 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:13.769 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:13.769 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:13.769 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:13.769 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:20:13.769 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:13.769 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:13.769 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:13.769 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:13.769 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:13.769 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:13.769 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:13.769 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.060 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:14.060 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:14.060 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:14.060 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:14.318
00:20:14.318 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:14.319 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:14.319 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:14.576 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:14.576 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:14.576 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:14.576 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.576 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:14.576 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:14.576 {
00:20:14.576 "auth": {
00:20:14.576 "dhgroup": "ffdhe4096",
00:20:14.576 "digest": "sha512",
00:20:14.576 "state": "completed"
00:20:14.576 },
00:20:14.576 "cntlid": 125,
00:20:14.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:20:14.576 "listen_address": {
00:20:14.576 "adrfam": "IPv4",
00:20:14.576 "traddr": "10.0.0.3",
00:20:14.576 "trsvcid": "4420",
00:20:14.576 "trtype": "TCP"
00:20:14.576 },
00:20:14.576 "peer_address": {
00:20:14.576 "adrfam": "IPv4",
00:20:14.576 "traddr": "10.0.0.1",
00:20:14.576 "trsvcid": "38794",
00:20:14.576 "trtype": "TCP"
00:20:14.576 },
00:20:14.576 "qid": 0,
00:20:14.576 "state": "enabled",
00:20:14.576 "thread": "nvmf_tgt_poll_group_000"
00:20:14.576 }
00:20:14.576 ]'
00:20:14.576 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:14.576 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:14.576 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:14.833 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:14.834 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:14.834 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:14.834 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:14.834 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:15.091 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu:
00:20:15.091 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu:
00:20:16.025 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:16.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:16.025 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:20:16.025 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:16.025 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.025 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:16.025 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:16.025 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:16.025 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:16.283 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:20:16.283 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:16.283 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:16.283 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:16.283 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:16.283 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:16.283 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3
00:20:16.283 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:16.283 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.283 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:16.283 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:16.283 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:16.283 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:16.543
00:20:16.544 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:16.544 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:16.544 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:16.806 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:16.806 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:16.806 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:16.806 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.806 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:16.806 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:16.806 {
00:20:16.806 "auth": {
00:20:16.807 "dhgroup": "ffdhe4096",
00:20:16.807 "digest": "sha512",
00:20:16.807 "state": "completed"
00:20:16.807 },
00:20:16.807 "cntlid": 127,
00:20:16.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:20:16.807 "listen_address": {
00:20:16.807 "adrfam": "IPv4",
00:20:16.807 "traddr": "10.0.0.3",
00:20:16.807 "trsvcid": "4420",
00:20:16.807 "trtype": "TCP"
00:20:16.807 },
00:20:16.807 "peer_address": {
00:20:16.807 "adrfam": "IPv4",
00:20:16.807 "traddr": "10.0.0.1",
00:20:16.807 "trsvcid": "38816",
00:20:16.807 "trtype": "TCP"
00:20:16.807 },
00:20:16.807 "qid": 0,
00:20:16.807 "state": "enabled",
00:20:16.807 "thread": "nvmf_tgt_poll_group_000"
00:20:16.807 }
00:20:16.807 ]'
00:20:16.807 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:17.066 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:17.066 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:17.066 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:17.066 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:17.066 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:17.066 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:17.066 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:17.325 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=:
00:20:17.325 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=:
00:20:18.260 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:18.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:18.260 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:20:18.260 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:18.260 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:18.260 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:18.260 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:18.260 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:18.260 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:18.260 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:18.518 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:20:18.518 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:18.518 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:18.518 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:18.518 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:18.518 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:18.518 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:18.518 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:18.518 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:18.518 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:18.518 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:18.518 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:18.518 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:19.083
00:20:19.083 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:19.083 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:19.083 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:19.341 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:19.341 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:19.341 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.341 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:19.341 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.341 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:19.341 {
00:20:19.341 "auth": {
00:20:19.341 "dhgroup": "ffdhe6144",
00:20:19.341 "digest": "sha512",
00:20:19.341 "state": "completed"
00:20:19.341 },
00:20:19.341 "cntlid": 129,
00:20:19.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:20:19.341 "listen_address": {
00:20:19.341 "adrfam": "IPv4",
00:20:19.341 "traddr": "10.0.0.3",
00:20:19.341 "trsvcid": "4420",
00:20:19.341 "trtype": "TCP"
00:20:19.341 },
00:20:19.341 "peer_address": {
00:20:19.341 "adrfam": "IPv4",
00:20:19.341 "traddr": "10.0.0.1",
00:20:19.341 "trsvcid": "38840",
00:20:19.341 "trtype": "TCP"
00:20:19.341 },
00:20:19.341 "qid": 0,
00:20:19.341 "state": "enabled",
00:20:19.341 "thread": "nvmf_tgt_poll_group_000"
00:20:19.341 }
00:20:19.341 ]'
00:20:19.341 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:19.341 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:19.341 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:19.599 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:19.599 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:19.599 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:19.599 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:19.857 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:19.857 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=:
00:20:19.857 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=:
00:20:20.793 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:20.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:20.793 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:20:20.793 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:20.793 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:20.793 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:20.793 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:20.793 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:20.793 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:21.051 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1
00:20:21.051 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:21.051 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:21.051 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:21.051 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:21.051 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:21.051 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:21.051 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.051 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:21.051 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.051 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:21.051 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:21.051 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:21.620
00:20:21.620 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:21.620 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:21.620 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:21.881 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:21.881 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:21.881 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.881 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:21.881 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.881 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:21.881 {
00:20:21.881 "auth": {
00:20:21.881 "dhgroup": "ffdhe6144",
00:20:21.881 "digest": "sha512",
00:20:21.881 "state": "completed"
00:20:21.881 },
00:20:21.881 "cntlid": 131,
00:20:21.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:20:21.881 "listen_address": {
00:20:21.881 "adrfam": "IPv4",
00:20:21.881 "traddr": "10.0.0.3",
00:20:21.881 "trsvcid": "4420",
00:20:21.881 "trtype": "TCP"
00:20:21.881 },
00:20:21.881 "peer_address": {
00:20:21.881 "adrfam": "IPv4",
00:20:21.881 "traddr": "10.0.0.1",
00:20:21.881 "trsvcid": "38868",
00:20:21.881 "trtype": "TCP"
00:20:21.881 },
00:20:21.881 "qid": 0,
00:20:21.881 "state": "enabled",
00:20:21.881 "thread": "nvmf_tgt_poll_group_000"
00:20:21.881 }
00:20:21.881 ]'
00:20:21.881 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:21.881 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:21.881 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:21.881 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:21.881 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:22.141 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:22.141 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:22.141 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:22.401 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==:
00:20:22.401 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==:
00:20:22.969 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:22.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:22.970 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:20:22.970 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.970 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:22.970 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.970 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:22.970 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:22.970 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:23.538 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2
00:20:23.538 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:23.538 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:23.538 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:23.538 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:23.538 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:23.538 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:23.538 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.538 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:23.538 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.538 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:23.538 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:23.538 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:24.109
00:20:24.109 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:24.109 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:24.109 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:24.367 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:24.367 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:24.367 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.367 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:24.367 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.367 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:24.367 {
00:20:24.367 "auth": {
00:20:24.367 "dhgroup": "ffdhe6144",
00:20:24.367 "digest": "sha512",
00:20:24.367 "state": "completed"
00:20:24.367 },
00:20:24.367 "cntlid": 133,
00:20:24.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:20:24.367 "listen_address": {
00:20:24.367 "adrfam": "IPv4",
00:20:24.367 "traddr": "10.0.0.3",
00:20:24.367 "trsvcid": "4420",
00:20:24.367 "trtype": "TCP"
00:20:24.367 },
00:20:24.367 "peer_address": {
00:20:24.367 "adrfam": "IPv4",
00:20:24.367 "traddr": "10.0.0.1",
00:20:24.367 "trsvcid": "49952",
00:20:24.367 "trtype": "TCP"
00:20:24.367 },
00:20:24.367 "qid": 0,
00:20:24.367 "state": "enabled",
00:20:24.367 "thread": "nvmf_tgt_poll_group_000"
00:20:24.367 }
00:20:24.367 ]'
00:20:24.367 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:24.367 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:24.367 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:24.367 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:24.367 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:24.367 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:24.367 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:24.934 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:24.934 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu:
00:20:24.934 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu:
00:20:25.503 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:25.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:25.503 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:20:25.503 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:25.503 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:25.503 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:25.503 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:25.503 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:25.503 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:25.762 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:20:25.762 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:25.762 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:25.762 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:25.762 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:25.762 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:25.762 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3
00:20:25.762 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:25.762 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:25.762 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:25.762 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:25.762 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:25.762 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:26.330
00:20:26.330 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:26.330 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:26.330 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:26.589 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:26.589 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:26.589 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:26.589 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:26.589 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:26.589 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:26.589 {
00:20:26.589 "auth": {
00:20:26.589 "dhgroup": "ffdhe6144",
00:20:26.589 "digest": "sha512",
00:20:26.589 "state": "completed"
00:20:26.589 },
00:20:26.589 "cntlid": 135,
00:20:26.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:20:26.589 "listen_address": {
00:20:26.589 "adrfam": "IPv4",
00:20:26.589 "traddr": "10.0.0.3",
00:20:26.589 "trsvcid": "4420",
00:20:26.589 "trtype": "TCP"
00:20:26.589 },
00:20:26.589 "peer_address": {
00:20:26.589 "adrfam": "IPv4",
00:20:26.589 "traddr": "10.0.0.1",
00:20:26.589 "trsvcid": "49990",
00:20:26.589 "trtype": "TCP"
00:20:26.589 },
00:20:26.589 "qid": 0,
00:20:26.589 "state": "enabled",
00:20:26.589 "thread": "nvmf_tgt_poll_group_000"
00:20:26.589 }
00:20:26.589 ]'
00:20:26.589 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:26.589 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:26.589 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:26.848 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:26.848 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:26.848 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:26.848 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:26.848 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:27.106 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=:
00:20:27.107 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=:
00:20:27.674 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:27.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:27.674 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:20:27.674 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:27.674 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:27.674 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:27.674 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:27.674 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:27.674 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:27.674 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:28.258 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:20:28.258 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:28.258 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:28.258 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:28.258 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:28.258 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:28.258 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:28.258 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:28.258 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:28.258 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:28.258 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:28.258 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:28.258 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:28.823
00:20:28.823 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:28.823 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:28.823 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:29.080 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:29.080 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:29.080 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.080 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:29.080 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.080 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:29.080 {
00:20:29.080 "auth": {
00:20:29.080 "dhgroup": "ffdhe8192",
00:20:29.080 "digest": "sha512",
00:20:29.080 "state": "completed"
00:20:29.080 },
00:20:29.080 "cntlid": 137,
00:20:29.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:20:29.080 "listen_address": {
00:20:29.080 "adrfam": "IPv4",
00:20:29.080 "traddr": "10.0.0.3",
00:20:29.080 "trsvcid": "4420",
00:20:29.080 "trtype": "TCP"
00:20:29.080 },
00:20:29.080 "peer_address": {
00:20:29.080 "adrfam": "IPv4",
00:20:29.080 "traddr": "10.0.0.1",
00:20:29.080 "trsvcid": "50014",
00:20:29.080 "trtype": "TCP"
00:20:29.080 },
00:20:29.080 "qid": 0,
00:20:29.080 "state": "enabled",
00:20:29.080 "thread": "nvmf_tgt_poll_group_000"
00:20:29.080 }
00:20:29.080 ]'
00:20:29.080 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:29.080 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:29.080 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:29.080 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:29.080 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:29.080 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:29.080 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:29.080 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:29.645 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=:
00:20:29.645 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=:
00:20:30.213 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:30.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:30.213 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:20:30.213 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:30.213 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:30.213 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:30.213 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:30.213 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:30.213 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:30.778 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:20:30.778 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:30.778 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:30.778 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:30.778 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:30.778 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:30.778 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:30.778 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:30.778 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:30.778 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:30.778 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:30.778 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:30.778 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:31.344
00:20:31.344 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:31.344 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:31.344 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:31.657 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:31.657 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:31.657 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:31.657 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:31.657 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:31.657 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:31.657 {
00:20:31.657 "auth": {
00:20:31.657 "dhgroup": "ffdhe8192",
00:20:31.657 "digest": "sha512",
00:20:31.657 "state": "completed"
00:20:31.657 },
00:20:31.657 "cntlid": 139,
00:20:31.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:20:31.657 "listen_address": {
00:20:31.657 "adrfam": "IPv4",
00:20:31.657 "traddr": "10.0.0.3",
00:20:31.657 "trsvcid": "4420",
00:20:31.657 "trtype": "TCP"
00:20:31.657 },
00:20:31.657 "peer_address": {
00:20:31.657 "adrfam": "IPv4",
00:20:31.657 "traddr": "10.0.0.1",
00:20:31.657 "trsvcid": "50042",
00:20:31.657 "trtype": "TCP"
00:20:31.657 },
00:20:31.657 "qid": 0,
00:20:31.657 "state": "enabled",
00:20:31.657 "thread": "nvmf_tgt_poll_group_000"
00:20:31.657 }
00:20:31.657 ]'
00:20:31.657 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:31.657 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:31.657 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:31.657 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:31.657 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:31.975 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:31.975 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:31.975 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:32.233 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==:
00:20:32.233 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: --dhchap-ctrl-secret DHHC-1:02:YzY2ZmEwN2I5ZGZiZTMzZGRjN2NjYmE2YTJhZDE2NWUxNGJlNDJjY2UwNDQxZWE3/KnQ5g==:
00:20:32.799 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:32.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:32.799 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:20:32.799 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.799 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:32.799 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.799 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:32.799 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:32.799 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:33.057 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:20:33.057 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:33.057 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:33.057 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:33.057 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:33.057 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:33.057 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:33.057 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:33.057 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.057 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.057 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:33.057 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:33.057 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:33.992
00:20:33.992 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:33.992 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:33.992 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:34.251 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:34.251 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:34.251 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:34.251 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:34.251 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:34.251 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:34.251 {
00:20:34.251 "auth": {
00:20:34.251 "dhgroup": "ffdhe8192",
00:20:34.251 "digest": "sha512",
00:20:34.251 "state": "completed"
00:20:34.251 },
00:20:34.251 "cntlid": 141,
00:20:34.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2",
00:20:34.251 "listen_address": {
00:20:34.251 "adrfam": "IPv4",
00:20:34.251 "traddr": "10.0.0.3",
00:20:34.251 "trsvcid": "4420",
00:20:34.251 "trtype": "TCP"
00:20:34.251 },
00:20:34.251 "peer_address": {
00:20:34.251 "adrfam": "IPv4",
00:20:34.251 "traddr": "10.0.0.1",
00:20:34.251 "trsvcid": "60904",
00:20:34.251 "trtype": "TCP"
00:20:34.251 },
00:20:34.251 "qid": 0,
00:20:34.251 "state": "enabled",
00:20:34.251 "thread": "nvmf_tgt_poll_group_000"
00:20:34.251 }
00:20:34.251 ]'
00:20:34.251 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:34.251 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:34.251 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:34.251 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:34.251 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:34.510 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:34.510 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:34.510 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:34.790 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu:
00:20:34.791 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:01:OGQyMDdhMDdmYzJhMzUwY2YyMjM4ZDMzMzE1MjgyN2N/Iveu:
00:20:35.372 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:35.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:35.372 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:20:35.372 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.372 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:35.372 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.372 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:35.372 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:35.372 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:35.631 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:20:35.631 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:35.632 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67
-- # digest=sha512 00:20:35.632 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:35.632 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:35.632 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.632 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3 00:20:35.632 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.632 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.890 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.891 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:35.891 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.891 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.480 00:20:36.480 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.480 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.480 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.738 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.738 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.738 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.738 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.738 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.738 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.738 { 00:20:36.738 "auth": { 00:20:36.738 "dhgroup": "ffdhe8192", 00:20:36.738 "digest": "sha512", 00:20:36.738 "state": "completed" 00:20:36.738 }, 00:20:36.738 "cntlid": 143, 00:20:36.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:20:36.738 "listen_address": { 00:20:36.738 "adrfam": "IPv4", 00:20:36.738 "traddr": "10.0.0.3", 00:20:36.738 "trsvcid": "4420", 00:20:36.738 "trtype": "TCP" 00:20:36.738 }, 00:20:36.738 "peer_address": { 00:20:36.738 "adrfam": "IPv4", 00:20:36.738 "traddr": "10.0.0.1", 00:20:36.738 "trsvcid": "60938", 00:20:36.738 "trtype": "TCP" 00:20:36.738 }, 00:20:36.738 "qid": 0, 00:20:36.738 
"state": "enabled", 00:20:36.738 "thread": "nvmf_tgt_poll_group_000" 00:20:36.738 } 00:20:36.738 ]' 00:20:36.738 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.738 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.738 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.996 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:36.996 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.996 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.996 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.996 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.265 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:20:37.265 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:20:38.204 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.204 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:20:38.204 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.204 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.204 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.204 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:38.204 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:20:38.204 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:38.204 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:38.204 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:38.204 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:38.465 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:20:38.465 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.465 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:38.465 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:38.465 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:38.465 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.465 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.465 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.465 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.465 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.465 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.465 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.465 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.032 00:20:39.032 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.032 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.032 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.596 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.596 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.596 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.596 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.596 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.596 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.596 { 00:20:39.596 "auth": { 00:20:39.597 "dhgroup": "ffdhe8192", 00:20:39.597 "digest": "sha512", 00:20:39.597 "state": "completed" 00:20:39.597 }, 00:20:39.597 
"cntlid": 145, 00:20:39.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:20:39.597 "listen_address": { 00:20:39.597 "adrfam": "IPv4", 00:20:39.597 "traddr": "10.0.0.3", 00:20:39.597 "trsvcid": "4420", 00:20:39.597 "trtype": "TCP" 00:20:39.597 }, 00:20:39.597 "peer_address": { 00:20:39.597 "adrfam": "IPv4", 00:20:39.597 "traddr": "10.0.0.1", 00:20:39.597 "trsvcid": "60970", 00:20:39.597 "trtype": "TCP" 00:20:39.597 }, 00:20:39.597 "qid": 0, 00:20:39.597 "state": "enabled", 00:20:39.597 "thread": "nvmf_tgt_poll_group_000" 00:20:39.597 } 00:20:39.597 ]' 00:20:39.597 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.597 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.597 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.597 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.597 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.597 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.597 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.597 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.854 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:20:39.854 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:00:NWQ2Mjk5ODQwYzA2ZTExNTUxM2IwYWM5NjI0OWU3MTU5MTY3ZWFhMzUyYjE1OWJl2HW34g==: --dhchap-ctrl-secret DHHC-1:03:ZGIyMzVkMjkwOTkyNmQ5MDljY2U4YjRhYWVhMWRkMjVhY2UwMDY5MmU0ODViZmIxZGU3NTI1OTZlM2IxY2Q3MHMqj88=: 00:20:40.789 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.789 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:20:40.789 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.789 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.789 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.789 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 00:20:40.789 13:14:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.789 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.789 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.789 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:40.789 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:40.789 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:40.789 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:40.789 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.789 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:40.789 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.789 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:20:40.789 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:40.789 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:41.358 2024/12/06 13:14:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:41.358 request: 00:20:41.358 { 00:20:41.358 "method": "bdev_nvme_attach_controller", 00:20:41.358 "params": { 00:20:41.358 "name": "nvme0", 00:20:41.358 "trtype": "tcp", 00:20:41.358 "traddr": "10.0.0.3", 00:20:41.358 "adrfam": "ipv4", 00:20:41.358 "trsvcid": "4420", 00:20:41.358 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:41.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:20:41.358 "prchk_reftag": false, 00:20:41.358 "prchk_guard": false, 00:20:41.358 "hdgst": false, 00:20:41.358 "ddgst": false, 00:20:41.358 "dhchap_key": "key2", 00:20:41.358 "allow_unrecognized_csi": false 00:20:41.358 } 00:20:41.358 } 00:20:41.358 Got JSON-RPC error response 00:20:41.358 GoRPCClient: error on JSON-RPC call 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 
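
The trace above exercises a negative path: the host was registered with key1 only, so NOT bdev_connect -b nvme0 --dhchap-key key2 is expected to fail, and the JSON-RPC error just shown (Code=-5, Input/output error) is the pass condition. A minimal sketch of that expected-failure idiom, assuming a simplified stand-in for the suite's NOT() helper from autotest_common.sh (the real helper, visible in the trace, also goes through valid_exec_arg and checks es > 128 to tell signal deaths from ordinary failures):

  # Simplified stand-in for autotest_common.sh's NOT(): succeed only if
  # the wrapped command fails with a non-zero exit status.
  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))
  }

  # The target only knows key1 for this host, so a key2 attach must be
  # rejected by DH-HMAC-CHAP (addresses and NQNs taken from the log):
  NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
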
00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:41.358 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:42.296 2024/12/06 13:14:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:42.296 request: 00:20:42.296 { 00:20:42.296 "method": "bdev_nvme_attach_controller", 00:20:42.296 "params": { 00:20:42.296 "name": "nvme0", 00:20:42.296 "trtype": "tcp", 00:20:42.296 "traddr": "10.0.0.3", 00:20:42.296 "adrfam": "ipv4", 00:20:42.296 "trsvcid": "4420", 00:20:42.296 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:42.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:20:42.296 "prchk_reftag": false, 00:20:42.296 "prchk_guard": false, 00:20:42.296 "hdgst": false, 00:20:42.296 "ddgst": false, 00:20:42.296 "dhchap_key": "key1", 00:20:42.296 "dhchap_ctrlr_key": "ckey2", 00:20:42.296 "allow_unrecognized_csi": false 00:20:42.296 } 00:20:42.296 } 00:20:42.296 Got JSON-RPC error response 00:20:42.296 GoRPCClient: error on JSON-RPC call 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # 
type -t bdev_connect 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.296 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.865 2024/12/06 13:14:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:42.865 request: 00:20:42.865 { 00:20:42.865 "method": "bdev_nvme_attach_controller", 00:20:42.865 "params": { 00:20:42.865 "name": "nvme0", 00:20:42.865 "trtype": "tcp", 00:20:42.865 "traddr": "10.0.0.3", 00:20:42.865 "adrfam": "ipv4", 00:20:42.865 "trsvcid": "4420", 00:20:42.865 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:42.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:20:42.865 "prchk_reftag": false, 00:20:42.865 "prchk_guard": false, 00:20:42.865 "hdgst": false, 00:20:42.865 "ddgst": false, 00:20:42.865 "dhchap_key": "key1", 00:20:42.865 "dhchap_ctrlr_key": "ckey1", 00:20:42.865 "allow_unrecognized_csi": false 00:20:42.865 } 00:20:42.865 } 00:20:42.865 Got JSON-RPC error response 00:20:42.865 GoRPCClient: error on JSON-RPC call 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 76719 00:20:42.865 13:14:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 76719 ']' 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 76719 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76719 00:20:42.865 killing process with pid 76719 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76719' 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 76719 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 76719 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=81749 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 81749 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81749 ']' 00:20:42.865 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.866 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.866 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
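
At this point the suite restarts the target: killprocess tears down pid 76719, and nvmfappstart launches a fresh nvmf_tgt (pid 81749) with --wait-for-rpc and the nvmf_auth debug log flag, then waits for its RPC socket. A rough sketch of the equivalent manual steps, assuming the same netns and paths as the log (the real helpers in nvmf/common.sh and autotest_common.sh do more bookkeeping):

  # Start the target idle (--wait-for-rpc) with DH-HMAC-CHAP debug logging:
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!

  # Minimal stand-in for waitforlisten: poll until the RPC socket answers.
  # rpc_get_methods is served even before framework initialization.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
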
00:20:42.866 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.866 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.434 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.434 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:43.434 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:43.434 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.434 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.434 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.434 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:43.434 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 81749 00:20:43.434 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81749 ']' 00:20:43.434 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.434 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.434 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
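
With the target idle, the run shortly below pre-loads the generated DH-HMAC-CHAP key files into the target's keyring before configuring subsystems. The keyring_file_add_key calls in the following trace come from this loop in target/auth.sh (@174-@176), reconstructed here from the trace; ckeys[i] may be empty, which is what the [[ -n ... ]] guard checks:

  # Key files (e.g. /tmp/spdk.key-null.aKm) were generated earlier in the run.
  for i in "${!keys[@]}"; do
      rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
      # Controller (bidirectional) keys are optional per key id:
      [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
  done
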
00:20:43.434 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.434 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.693 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.693 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:43.693 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:20:43.693 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.693 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.693 null0 00:20:43.693 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.693 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:43.693 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.aKm 00:20:43.693 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.693 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.693 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.693 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.VQR ]] 00:20:43.693 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VQR 00:20:43.693 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.693 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.693 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.693 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:43.694 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Qnt 00:20:43.694 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.694 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.694 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.694 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.kQz ]] 00:20:43.694 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kQz 00:20:43.694 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.694 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.694 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.694 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:43.694 13:14:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Y5K 00:20:43.694 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.694 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.694 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.694 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.MQ3 ]] 00:20:43.694 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MQ3 00:20:43.694 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.694 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ivT 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
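
Note the shape of this last positive case: key3 has no companion ckey3, so nvmf_subsystem_add_host is called with --dhchap-key key3 alone and authentication is unidirectional (only the host proves its identity), whereas keys 0-2 also passed --dhchap-ctrlr-key for mutual authentication. The ckey=(...) expansion in the trace is what makes that argument optional; inside connect_authenticate it looks roughly like this ($hostnqn stands in for the expanded host NQN seen in the log):

  # $3 is connect_authenticate's key-id argument; if ckeys[$3] is empty
  # the array expands to nothing and no --dhchap-ctrlr-key is passed.
  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"
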
00:20:43.952 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.889 nvme0n1 00:20:44.889 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.889 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.889 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.147 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.147 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.147 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.147 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.147 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.147 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.147 { 00:20:45.147 "auth": { 00:20:45.147 "dhgroup": "ffdhe8192", 00:20:45.147 "digest": "sha512", 00:20:45.147 "state": "completed" 00:20:45.147 }, 00:20:45.147 "cntlid": 1, 00:20:45.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:20:45.147 "listen_address": { 00:20:45.147 "adrfam": "IPv4", 00:20:45.147 "traddr": "10.0.0.3", 00:20:45.147 "trsvcid": "4420", 00:20:45.147 "trtype": "TCP" 00:20:45.147 }, 00:20:45.147 "peer_address": { 00:20:45.147 "adrfam": "IPv4", 00:20:45.147 "traddr": "10.0.0.1", 00:20:45.147 "trsvcid": "41376", 00:20:45.147 "trtype": "TCP" 00:20:45.147 }, 00:20:45.147 "qid": 0, 00:20:45.147 "state": "enabled", 00:20:45.147 "thread": "nvmf_tgt_poll_group_000" 00:20:45.147 } 00:20:45.147 ]' 00:20:45.147 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.147 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.407 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.407 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:45.407 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.407 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.407 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.407 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.665 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:20:45.665 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:20:46.600 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.600 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:20:46.600 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.600 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.600 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.600 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key3 00:20:46.600 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.600 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.600 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.600 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:46.600 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:46.858 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:46.858 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:46.858 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:46.858 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:46.858 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.858 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:46.858 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.858 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:46.858 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.858 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.117 2024/12/06 13:14:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:47.117 request: 00:20:47.117 { 00:20:47.117 "method": "bdev_nvme_attach_controller", 00:20:47.117 "params": { 00:20:47.117 "name": "nvme0", 00:20:47.117 "trtype": "tcp", 00:20:47.117 "traddr": "10.0.0.3", 00:20:47.117 "adrfam": "ipv4", 00:20:47.117 "trsvcid": "4420", 00:20:47.117 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:47.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:20:47.117 "prchk_reftag": false, 00:20:47.117 "prchk_guard": false, 00:20:47.117 "hdgst": false, 00:20:47.117 "ddgst": false, 00:20:47.117 "dhchap_key": "key3", 00:20:47.117 "allow_unrecognized_csi": false 00:20:47.117 } 00:20:47.117 } 00:20:47.117 Got JSON-RPC error response 00:20:47.117 GoRPCClient: error on JSON-RPC call 00:20:47.117 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:47.117 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:47.117 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:47.117 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:47.117 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:20:47.117 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:20:47.117 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:47.117 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:47.683 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:47.683 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:47.683 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:47.683 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:47.683 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.683 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:47.683 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.683 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:47.683 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.683 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.940 2024/12/06 13:14:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:47.940 request: 00:20:47.940 { 00:20:47.940 "method": "bdev_nvme_attach_controller", 00:20:47.940 "params": { 00:20:47.940 "name": "nvme0", 00:20:47.940 "trtype": "tcp", 00:20:47.940 "traddr": "10.0.0.3", 00:20:47.940 "adrfam": "ipv4", 00:20:47.940 "trsvcid": "4420", 00:20:47.940 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:47.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:20:47.940 "prchk_reftag": false, 00:20:47.940 "prchk_guard": false, 00:20:47.940 "hdgst": false, 00:20:47.940 "ddgst": false, 00:20:47.940 "dhchap_key": "key3", 00:20:47.940 "allow_unrecognized_csi": false 00:20:47.940 } 00:20:47.940 } 00:20:47.940 Got JSON-RPC error response 00:20:47.940 GoRPCClient: error on JSON-RPC call 00:20:47.940 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:47.940 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:47.940 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:47.940 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:47.940 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:47.940 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:20:47.940 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:47.940 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:47.940 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:47.940 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:48.197 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:20:48.197 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.197 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.197 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.197 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:20:48.197 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.197 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.197 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.197 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:48.197 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:48.197 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:48.197 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:48.197 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.197 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:48.197 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.197 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:48.198 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:48.198 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:48.765 2024/12/06 13:14:55 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:48.765 request: 00:20:48.765 { 00:20:48.765 "method": "bdev_nvme_attach_controller", 00:20:48.765 "params": { 00:20:48.765 "name": "nvme0", 00:20:48.765 "trtype": "tcp", 00:20:48.765 "traddr": "10.0.0.3", 00:20:48.765 "adrfam": "ipv4", 00:20:48.765 "trsvcid": "4420", 00:20:48.765 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:48.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:20:48.765 "prchk_reftag": false, 00:20:48.765 "prchk_guard": false, 00:20:48.765 "hdgst": false, 00:20:48.765 "ddgst": false, 00:20:48.765 "dhchap_key": "key0", 00:20:48.765 "dhchap_ctrlr_key": "key1", 00:20:48.765 "allow_unrecognized_csi": false 00:20:48.765 } 00:20:48.765 } 00:20:48.765 Got JSON-RPC error response 00:20:48.765 GoRPCClient: error on JSON-RPC call 00:20:48.765 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:48.765 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:48.765 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:48.766 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:48.766 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:20:48.766 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:48.766 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:49.024 nvme0n1 00:20:49.024 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:20:49.024 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.024 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:20:49.282 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.282 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.282 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.847 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 00:20:49.847 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.847 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:49.847 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.847 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:49.847 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:49.847 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:50.783 nvme0n1 00:20:50.783 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:20:50.783 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.783 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:20:51.351 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.351 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:51.351 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.351 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.351 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.351 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:20:51.351 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.351 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:20:51.659 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.659 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:20:51.659 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid 78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -l 0 --dhchap-secret DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: --dhchap-ctrl-secret DHHC-1:03:NmI5ZmQ3NzQ2ZWQ4YjkwODFlZjMyZTA2NDc1YzRlYTk0ODNhODQ3ZDEzNjJiMDBlZGM2MGNjNjU0YTgxYzc0ZJ5D6T8=: 00:20:52.242 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
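For orientation, the rotation pattern the trace above keeps exercising is a two-step handshake: the target first restricts the host entry to a new key with nvmf_subsystem_set_keys, and the host must then re-attach authenticating with that same key (attaching with any other key is what the NOT cases verify must fail). A minimal sketch of one rotation step, reusing the rpc.py path, host socket, NQNs and address from this run; key names such as key1 assume the keys were registered earlier in the test, which is not shown here:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2

    # Target side: from now on only key1 authenticates this host.
    "$rpc" nvmf_subsystem_set_keys "$subnqn" "$hostnqn" --dhchap-key key1

    # Host side: re-attach the bdev controller with the matching key.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1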
00:20:52.242 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:20:52.242 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:20:52.242 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:20:52.242 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:20:52.242 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:20:52.242 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:20:52.242 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.242 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.526 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:20:52.526 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:52.526 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:20:52.526 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:52.526 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.526 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:52.526 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.526 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:52.527 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:52.527 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:53.463 2024/12/06 13:14:59 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:53.463 request: 00:20:53.463 { 00:20:53.463 "method": "bdev_nvme_attach_controller", 00:20:53.463 "params": { 00:20:53.463 "name": "nvme0", 00:20:53.463 "trtype": "tcp", 00:20:53.463 "traddr": "10.0.0.3", 00:20:53.463 "adrfam": "ipv4", 
00:20:53.463 "trsvcid": "4420", 00:20:53.463 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:53.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2", 00:20:53.463 "prchk_reftag": false, 00:20:53.463 "prchk_guard": false, 00:20:53.463 "hdgst": false, 00:20:53.463 "ddgst": false, 00:20:53.463 "dhchap_key": "key1", 00:20:53.463 "allow_unrecognized_csi": false 00:20:53.463 } 00:20:53.463 } 00:20:53.463 Got JSON-RPC error response 00:20:53.463 GoRPCClient: error on JSON-RPC call 00:20:53.463 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:53.463 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:53.463 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:53.463 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:53.463 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:53.463 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:53.463 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:54.399 nvme0n1 00:20:54.399 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:20:54.399 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.399 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:20:54.659 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.659 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.659 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.917 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:20:54.917 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.917 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.917 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.917 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:20:54.917 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:54.917 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:55.483 nvme0n1 00:20:55.483 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:20:55.483 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:20:55.483 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.741 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.741 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.741 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.000 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:56.000 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.000 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.000 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.000 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: '' 2s 00:20:56.000 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:56.000 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:56.000 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: 00:20:56.000 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:20:56.000 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:56.000 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:56.000 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: ]] 00:20:56.000 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MzgwNzQwZDQyOWFjZjNlZWQ0OTYwM2NiZTk5MGE2MGZqwzjO: 00:20:56.000 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:20:56.000 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:56.000 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:58.531 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:20:58.531 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:58.531 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:58.531 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:58.531 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:58.531 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:58.531 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:58.531 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key1 --dhchap-ctrlr-key key2 00:20:58.531 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.531 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.531 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.532 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: 2s 00:20:58.532 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:58.532 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:58.532 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:20:58.532 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: 00:20:58.532 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:58.532 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:58.532 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:20:58.532 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: ]] 00:20:58.532 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NzNjZTlhZmY1MmYxNDdhZDhjM2I4NDdhNjEzNmFiNzNkNTc2MDc4MTM2MmUwODcwBmbs8A==: 00:20:58.532 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:58.532 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:00.482 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:00.482 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:00.482 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:00.482 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:00.482 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:00.482 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:00.482 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:00.482 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.482 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:00.482 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.482 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.482 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.482 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:00.482 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:00.482 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:01.419 nvme0n1 00:21:01.419 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:01.419 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.419 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.419 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.419 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:01.420 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:01.988 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:01.988 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:01.988 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:02.247 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.247 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:21:02.247 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.247 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.247 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.248 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:02.248 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:02.816 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:02.816 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.816 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:03.076 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.076 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:03.076 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.076 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.076 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.076 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:03.076 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:03.076 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:03.076 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:03.076 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.076 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:03.076 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.076 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:03.076 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 
--dhchap-ctrlr-key key3 00:21:04.012 2024/12/06 13:15:10 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:21:04.012 request: 00:21:04.012 { 00:21:04.012 "method": "bdev_nvme_set_keys", 00:21:04.012 "params": { 00:21:04.012 "name": "nvme0", 00:21:04.012 "dhchap_key": "key1", 00:21:04.012 "dhchap_ctrlr_key": "key3" 00:21:04.012 } 00:21:04.012 } 00:21:04.012 Got JSON-RPC error response 00:21:04.012 GoRPCClient: error on JSON-RPC call 00:21:04.012 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:04.012 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:04.012 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:04.012 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:04.012 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:04.012 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.012 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:04.270 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:21:04.270 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:05.205 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:05.205 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:05.205 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.555 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:05.555 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:05.555 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.555 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.555 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.555 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:05.555 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:05.555 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:06.928 nvme0n1 00:21:06.928 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:06.928 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.928 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.928 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.928 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:06.928 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:06.928 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:06.928 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:06.928 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:06.928 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:06.928 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:06.928 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:06.928 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:07.493 2024/12/06 13:15:13 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:21:07.493 request: 00:21:07.493 { 00:21:07.493 "method": "bdev_nvme_set_keys", 00:21:07.493 "params": { 00:21:07.493 "name": "nvme0", 00:21:07.493 "dhchap_key": "key2", 00:21:07.493 "dhchap_ctrlr_key": "key0" 00:21:07.493 } 00:21:07.493 } 00:21:07.493 Got JSON-RPC error response 00:21:07.493 GoRPCClient: error on JSON-RPC call 00:21:07.493 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:07.493 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:07.493 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:07.493 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:07.493 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:07.493 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 
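The short --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 attach options used above are what make the rejected re-key observable: after the Permission denied response the controller stops reconnecting within about a second, and the trace then polls the host until it reports no controllers. A minimal sketch of that wait loop, under the same host socket used throughout this run:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock'
    # Poll until the host has finished tearing the rejected controller down.
    while (( $($rpc bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1s
    done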
00:21:07.493 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.752 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:07.752 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:09.126 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:09.126 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:09.126 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.126 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:09.126 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:09.126 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:09.126 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 76763 00:21:09.126 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 76763 ']' 00:21:09.126 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 76763 00:21:09.126 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:09.126 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.126 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76763 00:21:09.126 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:09.126 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:09.126 killing process with pid 76763 00:21:09.127 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76763' 00:21:09.127 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 76763 00:21:09.127 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 76763 00:21:09.384 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:09.384 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:09.384 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:09.385 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:09.385 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:09.385 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:09.385 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:09.385 rmmod nvme_tcp 00:21:09.670 rmmod nvme_fabrics 00:21:09.670 rmmod nvme_keyring 00:21:09.670 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:09.670 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@128 -- # set -e 00:21:09.670 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:09.670 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 81749 ']' 00:21:09.670 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 81749 00:21:09.670 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 81749 ']' 00:21:09.670 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 81749 00:21:09.670 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:09.670 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.670 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81749 00:21:09.670 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:09.670 killing process with pid 81749 00:21:09.670 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:09.670 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81749' 00:21:09.670 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 81749 00:21:09.670 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 81749 00:21:09.670 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:09.670 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:09.670 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:09.670 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:09.670 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:21:09.670 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:09.670 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:09.670 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:09.670 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:09.670 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:09.670 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:09.670 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:09.928 13:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.aKm /tmp/spdk.key-sha256.Qnt /tmp/spdk.key-sha384.Y5K /tmp/spdk.key-sha512.ivT /tmp/spdk.key-sha512.VQR /tmp/spdk.key-sha384.kQz /tmp/spdk.key-sha256.MQ3 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:21:09.928 00:21:09.928 real 3m25.292s 00:21:09.928 user 8m21.611s 00:21:09.928 sys 0m23.744s 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.928 ************************************ 00:21:09.928 END TEST nvmf_auth_target 00:21:09.928 ************************************ 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:09.928 13:15:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:10.188 ************************************ 00:21:10.188 START TEST nvmf_bdevio_no_huge 00:21:10.188 ************************************ 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:10.188 * Looking for test storage... 
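The nvmftestfini teardown traced just before this point prunes firewall state with the iptr helper rather than flushing tables wholesale. A minimal sketch of that idiom, assuming root and that the test's rules carry an SPDK_NVMF marker in their saved form, as the grep in the trace implies:

    # Drop only the SPDK_NVMF-marked rules; unrelated firewall state survives.
    iptables-save | grep -v SPDK_NVMF | iptables-restore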
00:21:10.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.188 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:10.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.188 --rc genhtml_branch_coverage=1 00:21:10.188 --rc genhtml_function_coverage=1 00:21:10.188 --rc genhtml_legend=1 00:21:10.188 --rc geninfo_all_blocks=1 00:21:10.188 --rc geninfo_unexecuted_blocks=1 00:21:10.189 00:21:10.189 ' 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:10.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.189 --rc genhtml_branch_coverage=1 00:21:10.189 --rc genhtml_function_coverage=1 00:21:10.189 --rc genhtml_legend=1 00:21:10.189 --rc geninfo_all_blocks=1 00:21:10.189 --rc geninfo_unexecuted_blocks=1 00:21:10.189 00:21:10.189 ' 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:10.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.189 --rc genhtml_branch_coverage=1 00:21:10.189 --rc genhtml_function_coverage=1 00:21:10.189 --rc genhtml_legend=1 00:21:10.189 --rc geninfo_all_blocks=1 00:21:10.189 --rc geninfo_unexecuted_blocks=1 00:21:10.189 00:21:10.189 ' 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:10.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.189 --rc genhtml_branch_coverage=1 00:21:10.189 --rc genhtml_function_coverage=1 00:21:10.189 --rc genhtml_legend=1 00:21:10.189 --rc geninfo_all_blocks=1 00:21:10.189 --rc geninfo_unexecuted_blocks=1 00:21:10.189 00:21:10.189 ' 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:10.189 
13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.189 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:10.189 
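The "Cannot find device" messages that follow are expected on a fresh run: nvmf_veth_init begins with a best-effort teardown (each failing `ip link` command is immediately followed by `true` in the trace) before rebuilding the topology. A condensed sketch of what it builds, mirroring the commands traced below; the *_if2/*_br2 pair is created the same way with 10.0.0.2 and 10.0.0.4:

  ip netns add nvmf_tgt_ns_spdk                               # target namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                             # bridge joins the host-side peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # ...plus `ip link set ... up` on every interface, and iptables ACCEPT rules
  # for TCP dport 4420 on the initiator interfaces, tagged SPDK_NVMF so the
  # teardown can remove exactly these rules later.
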
13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:10.189 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:10.190 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:10.190 Cannot find device "nvmf_init_br" 00:21:10.190 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:21:10.190 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:10.190 Cannot find device "nvmf_init_br2" 00:21:10.190 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:21:10.190 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:10.448 Cannot find device "nvmf_tgt_br" 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:10.448 Cannot find device "nvmf_tgt_br2" 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:10.448 Cannot find device "nvmf_init_br" 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:10.448 Cannot find device "nvmf_init_br2" 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:10.448 Cannot find device "nvmf_tgt_br" 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:10.448 Cannot find device "nvmf_tgt_br2" 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:10.448 Cannot find device "nvmf_br" 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:10.448 Cannot find device "nvmf_init_if" 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:10.448 Cannot find device "nvmf_init_if2" 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:21:10.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:10.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:10.448 13:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:10.448 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:10.707 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:10.707 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:10.707 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:10.707 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:21:10.707 00:21:10.707 --- 10.0.0.3 ping statistics --- 00:21:10.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.707 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:10.707 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:10.707 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.100 ms 00:21:10.707 00:21:10.707 --- 10.0.0.4 ping statistics --- 00:21:10.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.707 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:10.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:10.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:10.707 00:21:10.707 --- 10.0.0.1 ping statistics --- 00:21:10.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.707 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:10.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:10.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:21:10.707 00:21:10.707 --- 10.0.0.2 ping statistics --- 00:21:10.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.707 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=82626 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 82626 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 82626 ']' 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.707 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:10.707 [2024-12-06 13:15:17.128309] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:21:10.707 [2024-12-06 13:15:17.128400] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:10.965 [2024-12-06 13:15:17.284029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.965 [2024-12-06 13:15:17.358207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.965 [2024-12-06 13:15:17.358268] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.965 [2024-12-06 13:15:17.358282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.965 [2024-12-06 13:15:17.358292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.966 [2024-12-06 13:15:17.358301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.966 [2024-12-06 13:15:17.359081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:10.966 [2024-12-06 13:15:17.359159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:21:10.966 [2024-12-06 13:15:17.360978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:21:10.966 [2024-12-06 13:15:17.361029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.223 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.223 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:21:11.223 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:11.223 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:11.223 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.224 [2024-12-06 13:15:17.556754] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.224 Malloc0 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.224 [2024-12-06 13:15:17.601153] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.224 { 00:21:11.224 "params": { 00:21:11.224 "name": "Nvme$subsystem", 00:21:11.224 "trtype": "$TEST_TRANSPORT", 00:21:11.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.224 "adrfam": "ipv4", 00:21:11.224 "trsvcid": "$NVMF_PORT", 00:21:11.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.224 "hdgst": ${hdgst:-false}, 00:21:11.224 "ddgst": ${ddgst:-false} 00:21:11.224 }, 00:21:11.224 "method": "bdev_nvme_attach_controller" 00:21:11.224 } 00:21:11.224 EOF 00:21:11.224 )") 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
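gen_nvmf_target_json above expands the heredoc once per requested subsystem, joins the fragments with IFS=, and pipes the result through `jq .`; the pretty-printed JSON that follows is exactly what bdevio is given. The `--json /dev/fd/62` on the bdevio command line is consistent with bash process substitution, i.e. roughly (a sketch of the call shape, not the verbatim script):

  # stream the generated bdev config into bdevio without a temporary file
  ./bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024

The attach parameters in the generated document (traddr 10.0.0.3, trsvcid 4420, subnqn ...cnode1) match the listener created a few lines earlier, so bdevio's Nvme1n1 device is the Malloc0 namespace exported over TCP.
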
00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:21:11.224 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:11.224 "params": { 00:21:11.224 "name": "Nvme1", 00:21:11.224 "trtype": "tcp", 00:21:11.224 "traddr": "10.0.0.3", 00:21:11.224 "adrfam": "ipv4", 00:21:11.224 "trsvcid": "4420", 00:21:11.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:11.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:11.224 "hdgst": false, 00:21:11.224 "ddgst": false 00:21:11.224 }, 00:21:11.224 "method": "bdev_nvme_attach_controller" 00:21:11.224 }' 00:21:11.224 [2024-12-06 13:15:17.662471] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:21:11.224 [2024-12-06 13:15:17.662562] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82666 ] 00:21:11.482 [2024-12-06 13:15:17.820313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:11.482 [2024-12-06 13:15:17.928537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.482 [2024-12-06 13:15:17.928652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.482 [2024-12-06 13:15:17.928669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.740 I/O targets: 00:21:11.740 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:11.740 00:21:11.740 00:21:11.740 CUnit - A unit testing framework for C - Version 2.1-3 00:21:11.740 http://cunit.sourceforge.net/ 00:21:11.740 00:21:11.740 00:21:11.740 Suite: bdevio tests on: Nvme1n1 00:21:11.740 Test: blockdev write read block ...passed 00:21:11.740 Test: blockdev write zeroes read block ...passed 00:21:11.997 Test: blockdev write zeroes read no split ...passed 00:21:11.997 Test: blockdev write zeroes read split ...passed 00:21:11.997 Test: blockdev write zeroes read split partial ...passed 00:21:11.998 Test: blockdev reset ...[2024-12-06 13:15:18.299840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:11.998 [2024-12-06 13:15:18.299978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ea720 (9): Bad file descriptor 00:21:11.998 [2024-12-06 13:15:18.311223] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:21:11.998 passed 00:21:11.998 Test: blockdev write read 8 blocks ...passed 00:21:11.998 Test: blockdev write read size > 128k ...passed 00:21:11.998 Test: blockdev write read invalid size ...passed 00:21:11.998 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:11.998 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:11.998 Test: blockdev write read max offset ...passed 00:21:11.998 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:11.998 Test: blockdev writev readv 8 blocks ...passed 00:21:11.998 Test: blockdev writev readv 30 x 1block ...passed 00:21:11.998 Test: blockdev writev readv block ...passed 00:21:11.998 Test: blockdev writev readv size > 128k ...passed 00:21:11.998 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:11.998 Test: blockdev comparev and writev ...[2024-12-06 13:15:18.485759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.998 [2024-12-06 13:15:18.485808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.998 [2024-12-06 13:15:18.485829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.998 [2024-12-06 13:15:18.485839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:11.998 [2024-12-06 13:15:18.486381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.998 [2024-12-06 13:15:18.486403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:11.998 [2024-12-06 13:15:18.486420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.998 [2024-12-06 13:15:18.486431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:11.998 [2024-12-06 13:15:18.486800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.998 [2024-12-06 13:15:18.486828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:11.998 [2024-12-06 13:15:18.486846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.998 [2024-12-06 13:15:18.486856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:11.998 [2024-12-06 13:15:18.487381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.998 [2024-12-06 13:15:18.487409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:11.998 [2024-12-06 13:15:18.487427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.998 [2024-12-06 13:15:18.487437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:12.257 passed 00:21:12.257 Test: blockdev nvme passthru rw ...passed 00:21:12.257 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:15:18.570336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:12.257 [2024-12-06 13:15:18.570390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:12.257 [2024-12-06 13:15:18.570529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:12.257 [2024-12-06 13:15:18.570551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:12.257 [2024-12-06 13:15:18.570676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:12.257 [2024-12-06 13:15:18.570697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:12.257 [2024-12-06 13:15:18.570816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:12.257 [2024-12-06 13:15:18.570837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:12.257 passed 00:21:12.257 Test: blockdev nvme admin passthru ...passed 00:21:12.257 Test: blockdev copy ...passed 00:21:12.257 00:21:12.257 Run Summary: Type Total Ran Passed Failed Inactive 00:21:12.257 suites 1 1 n/a 0 0 00:21:12.257 tests 23 23 23 0 0 00:21:12.257 asserts 152 152 152 0 n/a 00:21:12.257 00:21:12.257 Elapsed time = 0.924 seconds 00:21:12.515 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:12.515 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.515 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:12.515 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.515 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:12.515 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:12.515 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:12.515 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:12.774 rmmod nvme_tcp 00:21:12.774 rmmod nvme_fabrics 00:21:12.774 rmmod nvme_keyring 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 82626 ']' 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 82626 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 82626 ']' 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 82626 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82626 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:21:12.774 killing process with pid 82626 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82626' 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 82626 00:21:12.774 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 82626 00:21:13.032 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:13.032 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:13.032 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:13.032 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:21:13.032 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:13.032 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:21:13.032 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:21:13.032 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:13.032 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:13.032 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:13.290 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:13.290 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:13.290 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:13.290 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:13.290 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:13.290 13:15:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:13.290 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:13.290 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:13.290 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:13.290 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:13.290 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:13.290 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:13.290 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:13.290 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.290 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.290 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.290 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:21:13.290 00:21:13.290 real 0m3.326s 00:21:13.290 user 0m10.878s 00:21:13.290 sys 0m1.387s 00:21:13.290 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:13.290 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:13.290 ************************************ 00:21:13.290 END TEST nvmf_bdevio_no_huge 00:21:13.290 ************************************ 00:21:13.548 13:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:13.548 13:15:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:13.548 13:15:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.548 13:15:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:13.548 ************************************ 00:21:13.548 START TEST nvmf_tls 00:21:13.548 ************************************ 00:21:13.548 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:13.548 * Looking for test storage... 
00:21:13.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:13.548 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:13.548 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:21:13.548 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:13.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.548 --rc genhtml_branch_coverage=1 00:21:13.548 --rc genhtml_function_coverage=1 00:21:13.548 --rc genhtml_legend=1 00:21:13.548 --rc geninfo_all_blocks=1 00:21:13.548 --rc geninfo_unexecuted_blocks=1 00:21:13.548 00:21:13.548 ' 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:13.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.548 --rc genhtml_branch_coverage=1 00:21:13.548 --rc genhtml_function_coverage=1 00:21:13.548 --rc genhtml_legend=1 00:21:13.548 --rc geninfo_all_blocks=1 00:21:13.548 --rc geninfo_unexecuted_blocks=1 00:21:13.548 00:21:13.548 ' 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:13.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.548 --rc genhtml_branch_coverage=1 00:21:13.548 --rc genhtml_function_coverage=1 00:21:13.548 --rc genhtml_legend=1 00:21:13.548 --rc geninfo_all_blocks=1 00:21:13.548 --rc geninfo_unexecuted_blocks=1 00:21:13.548 00:21:13.548 ' 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:13.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.548 --rc genhtml_branch_coverage=1 00:21:13.548 --rc genhtml_function_coverage=1 00:21:13.548 --rc genhtml_legend=1 00:21:13.548 --rc geninfo_all_blocks=1 00:21:13.548 --rc geninfo_unexecuted_blocks=1 00:21:13.548 00:21:13.548 ' 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.548 13:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.548 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:13.548 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:13.549 
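As in the bdevio suite, nvmftestinit arms `trap nvmftestfini SIGINT SIGTERM EXIT` (traced just below), so the teardown seen at the end of the previous run fires even if a test aborts midway. An outline of that teardown, reconstructed from the earlier trace (not the verbatim nvmf/common.sh):

  nvmftestfini() {
      nvmfcleanup                     # sync; modprobe -v -r nvme-tcp nvme-fabrics ...
      [[ -n "$nvmfpid" ]] && killprocess "$nvmfpid"
      # iptr: strip only the SPDK_NVMF-tagged firewall rules that ipts added
      iptables-save | grep -v SPDK_NVMF | iptables-restore
      nvmf_veth_fini                  # links down, delete nvmf_br and the veth pairs
      _remove_spdk_ns                 # finally drop the nvmf_tgt_ns_spdk namespace
  }
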
13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:13.549 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:13.806 Cannot find device "nvmf_init_br" 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:13.806 Cannot find device "nvmf_init_br2" 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:13.806 Cannot find device "nvmf_tgt_br" 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:13.806 Cannot find device "nvmf_tgt_br2" 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:13.806 Cannot find device "nvmf_init_br" 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:13.806 Cannot find device "nvmf_init_br2" 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:13.806 Cannot find device "nvmf_tgt_br" 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:13.806 Cannot find device "nvmf_tgt_br2" 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:13.806 Cannot find device "nvmf_br" 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:13.806 Cannot find device "nvmf_init_if" 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:13.806 Cannot find device "nvmf_init_if2" 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:13.806 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:13.806 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:13.806 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:14.068 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:14.068 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:14.068 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:14.068 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:14.068 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:14.068 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:14.068 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:14.068 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:14.068 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:14.068 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:14.068 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:14.068 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:14.068 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:14.068 13:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:14.068 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:14.068 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:14.068 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:14.068 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:14.068 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:21:14.068 00:21:14.068 --- 10.0.0.3 ping statistics --- 00:21:14.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.068 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:14.068 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:14.068 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:14.068 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:21:14.068 00:21:14.068 --- 10.0.0.4 ping statistics --- 00:21:14.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.068 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:14.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:14.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:21:14.069 00:21:14.069 --- 10.0.0.1 ping statistics --- 00:21:14.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.069 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:14.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:14.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:21:14.069 00:21:14.069 --- 10.0.0.2 ping statistics --- 00:21:14.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.069 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82904 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82904 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82904 ']' 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.069 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.069 [2024-12-06 13:15:20.547211] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
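[Annotation] Everything from `ip netns add` through the four pings above is nvmf_veth_init building a self-contained test network: veth pairs whose initiator ends stay in the root namespace, target ends moved into nvmf_tgt_ns_spdk, a bridge (nvmf_br) tying the outer ends together, and iptables ACCEPT rules for the NVMe/TCP port. A condensed sketch of one initiator/target pair (iproute2 and root privileges assumed; names and addresses mirror the log, and the second pair is analogous):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                              # bridge joins the outer veth ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT          # allow bridged traffic, as at common.sh@219
    ping -c 1 10.0.0.3                                           # same reachability check as above

The `Cannot find device` and `Cannot open network namespace` lines earlier are the teardown half of the same helper running on a fresh VM; each failed cleanup command is followed by `true` so the script proceeds. With the namespace in place, nvmf/common.sh@227 prefixes NVMF_APP with `ip netns exec nvmf_tgt_ns_spdk`, which is why nvmf_tgt below listens on 10.0.0.3 from inside the namespace.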
00:21:14.069 [2024-12-06 13:15:20.547312] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.332 [2024-12-06 13:15:20.704074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.333 [2024-12-06 13:15:20.743723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.333 [2024-12-06 13:15:20.743785] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.333 [2024-12-06 13:15:20.743799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.333 [2024-12-06 13:15:20.743809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.333 [2024-12-06 13:15:20.743818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:14.333 [2024-12-06 13:15:20.744183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.333 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.333 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:14.333 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:14.333 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:14.333 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.590 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.590 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:21:14.590 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:14.848 true 00:21:14.848 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:14.848 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:21:15.105 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:21:15.105 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:21:15.105 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:15.363 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:15.363 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:21:15.928 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:21:15.928 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:21:15.928 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:16.186 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:21:16.186 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:21:16.443 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:21:16.443 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:21:16.443 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:16.443 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:21:16.701 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:21:16.701 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:21:16.701 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:16.959 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:16.959 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:17.524 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:21:17.524 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:21:17.524 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:17.783 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:17.783 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.H6yxRCG5gN 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.JLkpQfrZbN 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.H6yxRCG5gN 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.JLkpQfrZbN 00:21:18.042 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:18.609 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:18.867 13:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.H6yxRCG5gN 00:21:18.867 13:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.H6yxRCG5gN 00:21:18.867 13:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:19.434 [2024-12-06 13:15:25.656402] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.434 13:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:19.692 13:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:21:19.951 [2024-12-06 13:15:26.316597] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:19.951 [2024-12-06 13:15:26.316844] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:19.951 13:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:20.210 malloc0 00:21:20.210 13:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:20.468 13:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.H6yxRCG5gN 00:21:21.034 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:21.291 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.H6yxRCG5gN 00:21:33.487 Initializing NVMe Controllers 00:21:33.487 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:33.487 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:33.487 Initialization complete. Launching workers. 00:21:33.487 ======================================================== 00:21:33.487 Latency(us) 00:21:33.487 Device Information : IOPS MiB/s Average min max 00:21:33.487 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9160.49 35.78 6988.25 1498.54 11111.48 00:21:33.487 ======================================================== 00:21:33.487 Total : 9160.49 35.78 6988.25 1498.54 11111.48 00:21:33.487 00:21:33.487 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H6yxRCG5gN 00:21:33.487 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:33.487 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:33.487 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:33.487 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.H6yxRCG5gN 00:21:33.487 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:33.487 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83274 00:21:33.487 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:33.487 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83274 /var/tmp/bdevperf.sock 00:21:33.487 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:33.487 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83274 ']' 00:21:33.487 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:33.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:33.487 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.487 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:33.487 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.487 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.487 [2024-12-06 13:15:37.910881] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
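[Annotation] The two NVMeTLSkey-1:01:... strings produced at target/tls.sh@119-120 above are TLS PSK interchange keys: a fixed prefix, a hash identifier, then base64 of the configured PSK with a CRC32 appended, colon-terminated. A standalone sketch of the same transformation (assumptions flagged inline: the 32 ASCII characters are used directly as the key bytes, and the CRC32 is packed little-endian; python3 with only its standard library):

    key=00112233445566778899aabbccddeeff
    # assumption: key bytes are the ASCII string itself; CRC32 appended little-endian
    python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:01:"+base64.b64encode(k+struct.pack("<I",zlib.crc32(k))).decode()+":")' "$key"

If those assumptions hold, this reproduces the NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: value the harness writes to /tmp/tmp.H6yxRCG5gN and chmods to 0600 before handing it to keyring_file_add_key.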
00:21:33.487 [2024-12-06 13:15:37.911014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83274 ] 00:21:33.487 [2024-12-06 13:15:38.057935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.487 [2024-12-06 13:15:38.099329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.487 13:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.487 13:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:33.487 13:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H6yxRCG5gN 00:21:33.487 13:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:33.488 [2024-12-06 13:15:38.807291] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:33.488 TLSTESTn1 00:21:33.488 13:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:33.488 Running I/O for 10 seconds... 00:21:34.678 3551.00 IOPS, 13.87 MiB/s [2024-12-06T13:15:42.158Z] 3629.50 IOPS, 14.18 MiB/s [2024-12-06T13:15:43.093Z] 3690.67 IOPS, 14.42 MiB/s [2024-12-06T13:15:44.471Z] 3697.50 IOPS, 14.44 MiB/s [2024-12-06T13:15:45.405Z] 3756.60 IOPS, 14.67 MiB/s [2024-12-06T13:15:46.337Z] 3789.33 IOPS, 14.80 MiB/s [2024-12-06T13:15:47.272Z] 3800.00 IOPS, 14.84 MiB/s [2024-12-06T13:15:48.203Z] 3815.38 IOPS, 14.90 MiB/s [2024-12-06T13:15:49.137Z] 3844.78 IOPS, 15.02 MiB/s [2024-12-06T13:15:49.137Z] 3863.80 IOPS, 15.09 MiB/s 00:21:42.609 Latency(us) 00:21:42.609 [2024-12-06T13:15:49.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.609 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:42.609 Verification LBA range: start 0x0 length 0x2000 00:21:42.609 TLSTESTn1 : 10.02 3870.25 15.12 0.00 0.00 33012.71 5510.98 30265.72 00:21:42.609 [2024-12-06T13:15:49.137Z] =================================================================================================================== 00:21:42.609 [2024-12-06T13:15:49.137Z] Total : 3870.25 15.12 0.00 0.00 33012.71 5510.98 30265.72 00:21:42.609 { 00:21:42.609 "results": [ 00:21:42.609 { 00:21:42.609 "job": "TLSTESTn1", 00:21:42.609 "core_mask": "0x4", 00:21:42.609 "workload": "verify", 00:21:42.609 "status": "finished", 00:21:42.609 "verify_range": { 00:21:42.609 "start": 0, 00:21:42.609 "length": 8192 00:21:42.609 }, 00:21:42.609 "queue_depth": 128, 00:21:42.609 "io_size": 4096, 00:21:42.609 "runtime": 10.016401, 00:21:42.609 "iops": 3870.252399040334, 00:21:42.609 "mibps": 15.118173433751304, 00:21:42.609 "io_failed": 0, 00:21:42.609 "io_timeout": 0, 00:21:42.609 "avg_latency_us": 33012.711856781716, 00:21:42.609 "min_latency_us": 5510.981818181818, 00:21:42.609 "max_latency_us": 30265.716363636362 00:21:42.609 } 00:21:42.609 ], 00:21:42.609 "core_count": 1 00:21:42.609 } 00:21:42.609 13:15:49 
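[Annotation] bdevperf reports the run twice above, once as the human-readable latency table and once as a JSON document. The JSON is the easier artifact to post-process; a small sketch with jq (jq assumed available; field names copied from the output above, saved_results.json a hypothetical file holding that JSON):

    jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' saved_results.json
    # -> TLSTESTn1: 3870.252399040334 IOPS, 15.118173433751304 MiB/s, avg 33012.711856781716 us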
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:42.609 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83274 00:21:42.609 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83274 ']' 00:21:42.609 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83274 00:21:42.609 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:42.609 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.609 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83274 00:21:42.609 killing process with pid 83274 00:21:42.609 Received shutdown signal, test time was about 10.000000 seconds 00:21:42.609 00:21:42.609 Latency(us) 00:21:42.609 [2024-12-06T13:15:49.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.609 [2024-12-06T13:15:49.137Z] =================================================================================================================== 00:21:42.609 [2024-12-06T13:15:49.137Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:42.609 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:42.609 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:42.609 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83274' 00:21:42.609 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83274 00:21:42.609 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83274 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JLkpQfrZbN 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JLkpQfrZbN 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JLkpQfrZbN 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JLkpQfrZbN 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83420 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:42.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83420 /var/tmp/bdevperf.sock 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83420 ']' 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.867 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.867 [2024-12-06 13:15:49.312105] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:21:42.867 [2024-12-06 13:15:49.312199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83420 ] 00:21:43.125 [2024-12-06 13:15:49.459348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.125 [2024-12-06 13:15:49.492877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.125 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.125 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:43.125 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JLkpQfrZbN 00:21:43.382 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:43.639 [2024-12-06 13:15:50.102601] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:43.639 [2024-12-06 13:15:50.107701] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:43.639 [2024-12-06 13:15:50.108298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb806a0 (107): Transport endpoint is not connected 00:21:43.639 [2024-12-06 13:15:50.109285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb806a0 (9): Bad file descriptor 00:21:43.639 [2024-12-06 
13:15:50.110281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:43.639 [2024-12-06 13:15:50.110306] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:21:43.639 [2024-12-06 13:15:50.110318] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:43.639 [2024-12-06 13:15:50.110334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:43.639 2024/12/06 13:15:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:43.639 request: 00:21:43.639 { 00:21:43.639 "method": "bdev_nvme_attach_controller", 00:21:43.639 "params": { 00:21:43.639 "name": "TLSTEST", 00:21:43.639 "trtype": "tcp", 00:21:43.639 "traddr": "10.0.0.3", 00:21:43.639 "adrfam": "ipv4", 00:21:43.639 "trsvcid": "4420", 00:21:43.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.639 "prchk_reftag": false, 00:21:43.639 "prchk_guard": false, 00:21:43.639 "hdgst": false, 00:21:43.639 "ddgst": false, 00:21:43.639 "psk": "key0", 00:21:43.639 "allow_unrecognized_csi": false 00:21:43.639 } 00:21:43.639 } 00:21:43.639 Got JSON-RPC error response 00:21:43.639 GoRPCClient: error on JSON-RPC call 00:21:43.639 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83420 00:21:43.639 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83420 ']' 00:21:43.639 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83420 00:21:43.639 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:43.639 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.639 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83420 00:21:43.639 killing process with pid 83420 00:21:43.639 Received shutdown signal, test time was about 10.000000 seconds 00:21:43.639 00:21:43.639 Latency(us) 00:21:43.639 [2024-12-06T13:15:50.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.639 [2024-12-06T13:15:50.167Z] =================================================================================================================== 00:21:43.639 [2024-12-06T13:15:50.167Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:43.639 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:43.639 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:43.639 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83420' 00:21:43.639 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83420 00:21:43.639 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@978 -- # wait 83420 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.H6yxRCG5gN 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.H6yxRCG5gN 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.H6yxRCG5gN 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.H6yxRCG5gN 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83467 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83467 /var/tmp/bdevperf.sock 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83467 ']' 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.896 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.896 [2024-12-06 13:15:50.352879] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:21:43.896 [2024-12-06 13:15:50.352980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83467 ] 00:21:44.154 [2024-12-06 13:15:50.495756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.154 [2024-12-06 13:15:50.529274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.154 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.154 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:44.154 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H6yxRCG5gN 00:21:44.412 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:21:44.669 [2024-12-06 13:15:51.127823] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:44.669 [2024-12-06 13:15:51.136746] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:44.669 [2024-12-06 13:15:51.136788] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:44.669 [2024-12-06 13:15:51.136842] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:44.669 [2024-12-06 13:15:51.137451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b16a0 (107): Transport endpoint is not connected 00:21:44.669 [2024-12-06 13:15:51.138439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b16a0 (9): Bad file descriptor 00:21:44.669 [2024-12-06 13:15:51.139437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:44.669 [2024-12-06 13:15:51.139464] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:21:44.669 [2024-12-06 13:15:51.139476] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:44.669 [2024-12-06 13:15:51.139493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:21:44.669 2024/12/06 13:15:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:44.669 request: 00:21:44.669 { 00:21:44.669 "method": "bdev_nvme_attach_controller", 00:21:44.669 "params": { 00:21:44.669 "name": "TLSTEST", 00:21:44.669 "trtype": "tcp", 00:21:44.669 "traddr": "10.0.0.3", 00:21:44.669 "adrfam": "ipv4", 00:21:44.669 "trsvcid": "4420", 00:21:44.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.669 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:44.669 "prchk_reftag": false, 00:21:44.669 "prchk_guard": false, 00:21:44.669 "hdgst": false, 00:21:44.669 "ddgst": false, 00:21:44.669 "psk": "key0", 00:21:44.669 "allow_unrecognized_csi": false 00:21:44.669 } 00:21:44.669 } 00:21:44.669 Got JSON-RPC error response 00:21:44.669 GoRPCClient: error on JSON-RPC call 00:21:44.669 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83467 00:21:44.669 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83467 ']' 00:21:44.670 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83467 00:21:44.670 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:44.670 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:44.670 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83467 00:21:44.927 killing process with pid 83467 00:21:44.927 Received shutdown signal, test time was about 10.000000 seconds 00:21:44.927 00:21:44.927 Latency(us) 00:21:44.927 [2024-12-06T13:15:51.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.927 [2024-12-06T13:15:51.455Z] =================================================================================================================== 00:21:44.927 [2024-12-06T13:15:51.455Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83467' 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83467 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83467 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:44.927 13:15:51 
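[Annotation] The failure above (target/tls.sh@150) is the expected result of presenting the right key under the wrong host NQN. The target derives the TLS PSK identity from both NQNs, visible in the error as "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1", and key0 was only registered for host1 against cnode1 at target/tls.sh@59, so the lookup finds nothing and the handshake collapses into errno 107. By contrast, the earlier wrong-key case (target/tls.sh@147) shows no "Could not find PSK" line: there the identity apparently resolved but the offered key did not match. A hypothetical counter-example, shown only to illustrate the association the test depends on (granting it would defeat the negative test):

    # Hypothetical: registering host2 with the same PSK on the target would
    # let the host2 attach above authenticate instead of failing as intended.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0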
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.H6yxRCG5gN 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.H6yxRCG5gN 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.H6yxRCG5gN 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.H6yxRCG5gN 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83504 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83504 /var/tmp/bdevperf.sock 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83504 ']' 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.927 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.927 [2024-12-06 13:15:51.415439] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
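[Annotation] Each of these negative cases runs under the harness's NOT wrapper, whose bookkeeping shows up in the trace as `es=1`, `(( es > 128 ))`, `[[ -n '' ]]`, and `(( !es == 0 ))`: the wrapped command's exit status is captured and the step passes only because the attach failed. A simplified sketch of that pattern (a paraphrase of autotest_common.sh, not a verbatim copy; the signal-death and allowed-pattern branches seen in the trace are omitted):

    NOT() {
        local es=0
        "$@" || es=$?        # run the command, capture its exit status
        (( es != 0 ))        # invert: NOT succeeds only when the command failed
    }
    NOT false && echo "expected failure observed"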
00:21:44.927 [2024-12-06 13:15:51.415565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83504 ] 00:21:45.186 [2024-12-06 13:15:51.566621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.186 [2024-12-06 13:15:51.615052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.155 13:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.155 13:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:46.155 13:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H6yxRCG5gN 00:21:46.413 13:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:46.670 [2024-12-06 13:15:53.010142] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:46.670 [2024-12-06 13:15:53.017157] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:46.670 [2024-12-06 13:15:53.017207] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:46.670 [2024-12-06 13:15:53.017262] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:46.670 [2024-12-06 13:15:53.017871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135e6a0 (107): Transport endpoint is not connected 00:21:46.670 [2024-12-06 13:15:53.018859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135e6a0 (9): Bad file descriptor 00:21:46.670 [2024-12-06 13:15:53.019858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:21:46.670 [2024-12-06 13:15:53.019891] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:21:46.670 [2024-12-06 13:15:53.019904] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:21:46.670 [2024-12-06 13:15:53.019922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:21:46.670 2024/12/06 13:15:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:46.670 request: 00:21:46.670 { 00:21:46.670 "method": "bdev_nvme_attach_controller", 00:21:46.670 "params": { 00:21:46.670 "name": "TLSTEST", 00:21:46.670 "trtype": "tcp", 00:21:46.670 "traddr": "10.0.0.3", 00:21:46.670 "adrfam": "ipv4", 00:21:46.670 "trsvcid": "4420", 00:21:46.670 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:46.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:46.670 "prchk_reftag": false, 00:21:46.670 "prchk_guard": false, 00:21:46.670 "hdgst": false, 00:21:46.670 "ddgst": false, 00:21:46.670 "psk": "key0", 00:21:46.670 "allow_unrecognized_csi": false 00:21:46.670 } 00:21:46.670 } 00:21:46.670 Got JSON-RPC error response 00:21:46.670 GoRPCClient: error on JSON-RPC call 00:21:46.670 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83504 00:21:46.670 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83504 ']' 00:21:46.670 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83504 00:21:46.670 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:46.670 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.670 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83504 00:21:46.670 killing process with pid 83504 00:21:46.670 Received shutdown signal, test time was about 10.000000 seconds 00:21:46.670 00:21:46.670 Latency(us) 00:21:46.670 [2024-12-06T13:15:53.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.670 [2024-12-06T13:15:53.198Z] =================================================================================================================== 00:21:46.670 [2024-12-06T13:15:53.198Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:46.670 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:46.670 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:46.670 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83504' 00:21:46.670 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83504 00:21:46.670 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83504 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:46.928 13:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:46.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83556 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83556 /var/tmp/bdevperf.sock 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83556 ']' 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.928 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.928 [2024-12-06 13:15:53.270377] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:21:46.928 [2024-12-06 13:15:53.270513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83556 ] 00:21:46.928 [2024-12-06 13:15:53.424876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.186 [2024-12-06 13:15:53.457824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.186 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.186 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:47.186 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:21:47.444 [2024-12-06 13:15:53.858972] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:21:47.444 [2024-12-06 13:15:53.859029] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:47.444 2024/12/06 13:15:53 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:21:47.444 request: 00:21:47.444 { 00:21:47.444 "method": "keyring_file_add_key", 00:21:47.444 "params": { 00:21:47.444 "name": "key0", 00:21:47.444 "path": "" 00:21:47.444 } 00:21:47.444 } 00:21:47.444 Got JSON-RPC error response 00:21:47.444 GoRPCClient: error on JSON-RPC call 00:21:47.444 13:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:47.703 [2024-12-06 13:15:54.151129] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:47.703 [2024-12-06 13:15:54.151199] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:47.703 2024/12/06 13:15:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:21:47.703 request: 00:21:47.703 { 00:21:47.703 "method": "bdev_nvme_attach_controller", 00:21:47.703 "params": { 00:21:47.703 "name": "TLSTEST", 00:21:47.703 "trtype": "tcp", 00:21:47.703 "traddr": "10.0.0.3", 00:21:47.703 "adrfam": "ipv4", 00:21:47.703 "trsvcid": "4420", 00:21:47.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:47.703 "prchk_reftag": false, 00:21:47.703 "prchk_guard": false, 00:21:47.703 "hdgst": false, 00:21:47.703 "ddgst": false, 00:21:47.703 "psk": "key0", 00:21:47.703 "allow_unrecognized_csi": false 00:21:47.703 } 00:21:47.703 } 00:21:47.703 Got JSON-RPC error response 00:21:47.703 GoRPCClient: error on JSON-RPC call 00:21:47.703 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83556 00:21:47.703 13:15:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83556 ']' 00:21:47.703 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83556 00:21:47.703 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:47.703 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.703 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83556 00:21:47.703 killing process with pid 83556 00:21:47.703 Received shutdown signal, test time was about 10.000000 seconds 00:21:47.703 00:21:47.703 Latency(us) 00:21:47.703 [2024-12-06T13:15:54.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.703 [2024-12-06T13:15:54.231Z] =================================================================================================================== 00:21:47.703 [2024-12-06T13:15:54.231Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:47.703 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:47.703 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:47.703 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83556' 00:21:47.703 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83556 00:21:47.703 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83556 00:21:47.962 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:47.962 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:47.962 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:47.962 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:47.962 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:47.962 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 82904 00:21:47.962 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82904 ']' 00:21:47.962 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82904 00:21:47.962 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:47.962 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.962 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82904 00:21:47.962 killing process with pid 82904 00:21:47.962 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:47.962 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:47.962 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82904' 00:21:47.962 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82904 00:21:47.962 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82904 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.QttsGYeAgw 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.QttsGYeAgw 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83613 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83613 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83613 ']' 00:21:48.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.224 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.224 [2024-12-06 13:15:54.620816] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
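[editor's note] The format_interchange_psk step above builds the NVMeTLSkey-1 interchange string that the rest of this run feeds to keyring_file_add_key. A minimal Python sketch that reproduces the logged value follows; the treatment of the hex string as literal ASCII key bytes and the little-endian CRC byte order are assumptions inferred from the logged base64 (which decodes back to the hex string), not taken from the test scripts, and ":02:" is understood to select the SHA-384 hash variant (the trailing "2" argument).

    # Sketch: rebuild the interchange PSK seen in the log above.
    import base64, zlib

    psk_hex = "00112233445566778899aabbccddeeff0011223344556677"  # value from the log
    raw = psk_hex.encode("ascii")  # assumption: the helper uses the literal ASCII characters as key bytes
    crc = zlib.crc32(raw).to_bytes(4, "little")  # assumption: CRC-32 of the key appended little-endian
    print("NVMeTLSkey-1:02:" + base64.b64encode(raw + crc).decode("ascii") + ":")
    # expected (from the log):
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: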
00:21:48.224 [2024-12-06 13:15:54.620912] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.483 [2024-12-06 13:15:54.768660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.483 [2024-12-06 13:15:54.800732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.483 [2024-12-06 13:15:54.800799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.483 [2024-12-06 13:15:54.800811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.483 [2024-12-06 13:15:54.800820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.483 [2024-12-06 13:15:54.800828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.483 [2024-12-06 13:15:54.801149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.483 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.483 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:48.483 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:48.483 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:48.483 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.483 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.483 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.QttsGYeAgw 00:21:48.483 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QttsGYeAgw 00:21:48.483 13:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:48.744 [2024-12-06 13:15:55.174752] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.744 13:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:49.004 13:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:21:49.261 [2024-12-06 13:15:55.758896] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:49.261 [2024-12-06 13:15:55.759164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:49.261 13:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:49.827 malloc0 00:21:49.827 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:50.085 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.QttsGYeAgw 00:21:50.343 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:50.601 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QttsGYeAgw 00:21:50.601 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:50.601 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:50.601 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:50.601 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QttsGYeAgw 00:21:50.601 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:50.601 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83709 00:21:50.601 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:50.601 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:50.601 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83709 /var/tmp/bdevperf.sock 00:21:50.601 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83709 ']' 00:21:50.601 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.601 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.601 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:50.601 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.602 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.860 [2024-12-06 13:15:57.142082] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:21:50.860 [2024-12-06 13:15:57.142221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83709 ] 00:21:50.860 [2024-12-06 13:15:57.301921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.860 [2024-12-06 13:15:57.350440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.118 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.118 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:51.118 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QttsGYeAgw 00:21:51.378 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:51.636 [2024-12-06 13:15:58.005317] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.636 TLSTESTn1 00:21:51.636 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:51.895 Running I/O for 10 seconds... 00:21:53.777 3848.00 IOPS, 15.03 MiB/s [2024-12-06T13:16:01.269Z] 3887.00 IOPS, 15.18 MiB/s [2024-12-06T13:16:02.239Z] 3900.33 IOPS, 15.24 MiB/s [2024-12-06T13:16:03.611Z] 3907.25 IOPS, 15.26 MiB/s [2024-12-06T13:16:04.580Z] 3912.40 IOPS, 15.28 MiB/s [2024-12-06T13:16:05.521Z] 3873.50 IOPS, 15.13 MiB/s [2024-12-06T13:16:06.476Z] 3837.29 IOPS, 14.99 MiB/s [2024-12-06T13:16:07.411Z] 3830.50 IOPS, 14.96 MiB/s [2024-12-06T13:16:08.375Z] 3813.56 IOPS, 14.90 MiB/s [2024-12-06T13:16:08.375Z] 3809.70 IOPS, 14.88 MiB/s 00:22:01.847 Latency(us) 00:22:01.847 [2024-12-06T13:16:08.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.847 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:01.847 Verification LBA range: start 0x0 length 0x2000 00:22:01.847 TLSTESTn1 : 10.02 3815.71 14.91 0.00 0.00 33482.65 6285.50 32648.84 00:22:01.847 [2024-12-06T13:16:08.375Z] =================================================================================================================== 00:22:01.847 [2024-12-06T13:16:08.375Z] Total : 3815.71 14.91 0.00 0.00 33482.65 6285.50 32648.84 00:22:01.847 { 00:22:01.847 "results": [ 00:22:01.847 { 00:22:01.847 "job": "TLSTESTn1", 00:22:01.847 "core_mask": "0x4", 00:22:01.847 "workload": "verify", 00:22:01.847 "status": "finished", 00:22:01.847 "verify_range": { 00:22:01.847 "start": 0, 00:22:01.847 "length": 8192 00:22:01.847 }, 00:22:01.847 "queue_depth": 128, 00:22:01.847 "io_size": 4096, 00:22:01.847 "runtime": 10.017803, 00:22:01.847 "iops": 3815.706897011251, 00:22:01.847 "mibps": 14.905105066450199, 00:22:01.847 "io_failed": 0, 00:22:01.847 "io_timeout": 0, 00:22:01.847 "avg_latency_us": 33482.65178571854, 00:22:01.847 "min_latency_us": 6285.498181818181, 00:22:01.847 "max_latency_us": 32648.843636363636 00:22:01.847 } 00:22:01.847 ], 00:22:01.847 "core_count": 1 00:22:01.847 } 00:22:01.847 13:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.847 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83709 00:22:01.847 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83709 ']' 00:22:01.847 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83709 00:22:01.847 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:01.847 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.847 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83709 00:22:01.847 killing process with pid 83709 00:22:01.847 Received shutdown signal, test time was about 10.000000 seconds 00:22:01.847 00:22:01.847 Latency(us) 00:22:01.847 [2024-12-06T13:16:08.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.847 [2024-12-06T13:16:08.375Z] =================================================================================================================== 00:22:01.847 [2024-12-06T13:16:08.375Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:01.847 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:01.847 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:01.847 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83709' 00:22:01.847 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83709 00:22:01.847 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83709 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.QttsGYeAgw 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QttsGYeAgw 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QttsGYeAgw 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:02.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QttsGYeAgw 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QttsGYeAgw 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83856 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83856 /var/tmp/bdevperf.sock 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83856 ']' 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.105 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.105 [2024-12-06 13:16:08.497785] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:22:02.105 [2024-12-06 13:16:08.497884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83856 ] 00:22:02.363 [2024-12-06 13:16:08.641808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.363 [2024-12-06 13:16:08.675321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.363 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.363 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:02.363 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QttsGYeAgw 00:22:02.620 [2024-12-06 13:16:09.044093] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.QttsGYeAgw': 0100666 00:22:02.620 [2024-12-06 13:16:09.044143] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:02.620 2024/12/06 13:16:09 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.QttsGYeAgw], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:22:02.620 request: 00:22:02.620 { 00:22:02.620 "method": "keyring_file_add_key", 00:22:02.620 "params": { 00:22:02.620 "name": "key0", 00:22:02.620 "path": "/tmp/tmp.QttsGYeAgw" 00:22:02.620 } 00:22:02.620 } 00:22:02.620 Got JSON-RPC error response 00:22:02.620 GoRPCClient: error on JSON-RPC call 00:22:02.620 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:02.880 [2024-12-06 13:16:09.320255] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.880 [2024-12-06 13:16:09.320325] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:02.880 2024/12/06 13:16:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:22:02.880 request: 00:22:02.880 { 00:22:02.880 "method": "bdev_nvme_attach_controller", 00:22:02.880 "params": { 00:22:02.880 "name": "TLSTEST", 00:22:02.880 "trtype": "tcp", 00:22:02.880 "traddr": "10.0.0.3", 00:22:02.880 "adrfam": "ipv4", 00:22:02.880 "trsvcid": "4420", 00:22:02.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.880 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.880 "prchk_reftag": false, 00:22:02.880 "prchk_guard": false, 00:22:02.880 "hdgst": false, 00:22:02.880 "ddgst": false, 00:22:02.880 "psk": "key0", 00:22:02.880 "allow_unrecognized_csi": false 00:22:02.880 } 00:22:02.880 } 00:22:02.880 Got JSON-RPC error response 00:22:02.880 GoRPCClient: error on JSON-RPC call 00:22:02.880 13:16:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83856 00:22:02.880 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83856 ']' 00:22:02.880 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83856 00:22:02.880 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:02.880 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.880 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83856 00:22:02.880 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:02.880 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:02.880 killing process with pid 83856 00:22:02.880 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83856' 00:22:02.880 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.880 00:22:02.880 Latency(us) 00:22:02.880 [2024-12-06T13:16:09.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.880 [2024-12-06T13:16:09.408Z] =================================================================================================================== 00:22:02.880 [2024-12-06T13:16:09.408Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:02.880 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83856 00:22:02.880 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83856 00:22:03.139 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:03.139 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:03.139 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:03.139 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:03.139 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:03.139 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83613 00:22:03.139 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83613 ']' 00:22:03.139 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83613 00:22:03.139 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:03.139 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.139 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83613 00:22:03.139 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:03.139 killing process with pid 83613 00:22:03.139 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:03.139 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83613' 00:22:03.139 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83613 00:22:03.139 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@978 -- # wait 83613 00:22:03.398 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:03.398 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:03.398 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:03.398 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.398 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83906 00:22:03.398 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83906 00:22:03.398 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:03.398 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83906 ']' 00:22:03.398 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.398 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.398 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.398 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.399 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.399 [2024-12-06 13:16:09.738275] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:03.399 [2024-12-06 13:16:09.738368] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.399 [2024-12-06 13:16:09.882327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.399 [2024-12-06 13:16:09.915313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.399 [2024-12-06 13:16:09.915380] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.399 [2024-12-06 13:16:09.915394] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.399 [2024-12-06 13:16:09.915402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.399 [2024-12-06 13:16:09.915409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:03.399 [2024-12-06 13:16:09.915743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.657 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.657 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:03.657 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:03.657 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:03.657 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.657 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.657 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.QttsGYeAgw 00:22:03.657 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:03.657 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.QttsGYeAgw 00:22:03.657 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:22:03.657 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.657 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:22:03.657 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.657 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.QttsGYeAgw 00:22:03.657 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QttsGYeAgw 00:22:03.657 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:03.915 [2024-12-06 13:16:10.299477] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.915 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:04.481 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:22:04.481 [2024-12-06 13:16:10.975622] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:04.481 [2024-12-06 13:16:10.975909] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:04.481 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:05.046 malloc0 00:22:05.046 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:05.302 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QttsGYeAgw 00:22:05.560 [2024-12-06 13:16:11.874710] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.QttsGYeAgw': 
0100666 00:22:05.560 [2024-12-06 13:16:11.874762] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:05.560 2024/12/06 13:16:11 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.QttsGYeAgw], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:22:05.560 request: 00:22:05.560 { 00:22:05.560 "method": "keyring_file_add_key", 00:22:05.560 "params": { 00:22:05.560 "name": "key0", 00:22:05.560 "path": "/tmp/tmp.QttsGYeAgw" 00:22:05.560 } 00:22:05.560 } 00:22:05.560 Got JSON-RPC error response 00:22:05.560 GoRPCClient: error on JSON-RPC call 00:22:05.560 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:05.819 [2024-12-06 13:16:12.210821] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:22:05.819 [2024-12-06 13:16:12.210907] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:05.819 2024/12/06 13:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:22:05.819 request: 00:22:05.819 { 00:22:05.819 "method": "nvmf_subsystem_add_host", 00:22:05.819 "params": { 00:22:05.819 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.819 "host": "nqn.2016-06.io.spdk:host1", 00:22:05.819 "psk": "key0" 00:22:05.819 } 00:22:05.819 } 00:22:05.819 Got JSON-RPC error response 00:22:05.819 GoRPCClient: error on JSON-RPC call 00:22:05.819 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:05.819 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:05.819 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:05.819 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:05.819 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 83906 00:22:05.819 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83906 ']' 00:22:05.819 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83906 00:22:05.819 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:05.819 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.819 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83906 00:22:05.819 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:05.819 killing process with pid 83906 00:22:05.819 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:05.819 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83906' 00:22:05.819 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83906 00:22:05.819 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83906 00:22:06.079 13:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.QttsGYeAgw 00:22:06.079 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:22:06.079 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:06.079 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:06.079 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.079 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84010 00:22:06.079 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:06.079 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84010 00:22:06.079 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84010 ']' 00:22:06.079 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.079 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.079 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.079 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.079 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.338 [2024-12-06 13:16:12.684477] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:06.338 [2024-12-06 13:16:12.684618] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.338 [2024-12-06 13:16:12.836651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.597 [2024-12-06 13:16:12.875462] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.597 [2024-12-06 13:16:12.875531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.597 [2024-12-06 13:16:12.875546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.597 [2024-12-06 13:16:12.875556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.597 [2024-12-06 13:16:12.875564] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:06.597 [2024-12-06 13:16:12.875933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.597 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.597 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:06.597 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:06.597 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:06.597 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.597 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.597 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.QttsGYeAgw 00:22:06.597 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QttsGYeAgw 00:22:06.597 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:06.856 [2024-12-06 13:16:13.345937] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.856 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:07.422 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:22:07.681 [2024-12-06 13:16:13.966073] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:07.681 [2024-12-06 13:16:13.966304] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:07.681 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:07.941 malloc0 00:22:07.941 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:08.200 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QttsGYeAgw 00:22:08.459 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:09.025 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=84118 00:22:09.025 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:09.025 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:09.025 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 84118 /var/tmp/bdevperf.sock 00:22:09.025 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84118 ']' 00:22:09.025 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:22:09.025 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:09.025 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:09.025 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.025 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.025 [2024-12-06 13:16:15.350198] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:09.025 [2024-12-06 13:16:15.350307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84118 ] 00:22:09.025 [2024-12-06 13:16:15.513820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.282 [2024-12-06 13:16:15.562675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.282 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.282 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:09.282 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QttsGYeAgw 00:22:09.542 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:09.800 [2024-12-06 13:16:16.262129] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:10.059 TLSTESTn1 00:22:10.059 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:10.318 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:22:10.318 "subsystems": [ 00:22:10.318 { 00:22:10.318 "subsystem": "keyring", 00:22:10.318 "config": [ 00:22:10.318 { 00:22:10.318 "method": "keyring_file_add_key", 00:22:10.318 "params": { 00:22:10.318 "name": "key0", 00:22:10.318 "path": "/tmp/tmp.QttsGYeAgw" 00:22:10.318 } 00:22:10.318 } 00:22:10.318 ] 00:22:10.318 }, 00:22:10.318 { 00:22:10.318 "subsystem": "iobuf", 00:22:10.318 "config": [ 00:22:10.318 { 00:22:10.318 "method": "iobuf_set_options", 00:22:10.318 "params": { 00:22:10.318 "enable_numa": false, 00:22:10.318 "large_bufsize": 135168, 00:22:10.318 "large_pool_count": 1024, 00:22:10.318 "small_bufsize": 8192, 00:22:10.318 "small_pool_count": 8192 00:22:10.318 } 00:22:10.318 } 00:22:10.318 ] 00:22:10.318 }, 00:22:10.318 { 00:22:10.318 "subsystem": "sock", 00:22:10.318 "config": [ 00:22:10.318 { 00:22:10.318 "method": "sock_set_default_impl", 00:22:10.318 "params": { 00:22:10.318 "impl_name": "posix" 00:22:10.318 } 00:22:10.318 }, 00:22:10.318 { 00:22:10.318 "method": "sock_impl_set_options", 00:22:10.318 "params": { 00:22:10.318 "enable_ktls": false, 00:22:10.318 "enable_placement_id": 0, 00:22:10.318 "enable_quickack": false, 
00:22:10.318 "enable_recv_pipe": true, 00:22:10.318 "enable_zerocopy_send_client": false, 00:22:10.318 "enable_zerocopy_send_server": true, 00:22:10.318 "impl_name": "ssl", 00:22:10.318 "recv_buf_size": 4096, 00:22:10.318 "send_buf_size": 4096, 00:22:10.318 "tls_version": 0, 00:22:10.318 "zerocopy_threshold": 0 00:22:10.318 } 00:22:10.318 }, 00:22:10.318 { 00:22:10.318 "method": "sock_impl_set_options", 00:22:10.318 "params": { 00:22:10.318 "enable_ktls": false, 00:22:10.318 "enable_placement_id": 0, 00:22:10.318 "enable_quickack": false, 00:22:10.318 "enable_recv_pipe": true, 00:22:10.318 "enable_zerocopy_send_client": false, 00:22:10.318 "enable_zerocopy_send_server": true, 00:22:10.318 "impl_name": "posix", 00:22:10.318 "recv_buf_size": 2097152, 00:22:10.318 "send_buf_size": 2097152, 00:22:10.318 "tls_version": 0, 00:22:10.318 "zerocopy_threshold": 0 00:22:10.318 } 00:22:10.318 } 00:22:10.318 ] 00:22:10.318 }, 00:22:10.318 { 00:22:10.318 "subsystem": "vmd", 00:22:10.318 "config": [] 00:22:10.318 }, 00:22:10.318 { 00:22:10.318 "subsystem": "accel", 00:22:10.318 "config": [ 00:22:10.318 { 00:22:10.318 "method": "accel_set_options", 00:22:10.318 "params": { 00:22:10.318 "buf_count": 2048, 00:22:10.318 "large_cache_size": 16, 00:22:10.318 "sequence_count": 2048, 00:22:10.318 "small_cache_size": 128, 00:22:10.318 "task_count": 2048 00:22:10.318 } 00:22:10.318 } 00:22:10.318 ] 00:22:10.318 }, 00:22:10.318 { 00:22:10.318 "subsystem": "bdev", 00:22:10.318 "config": [ 00:22:10.319 { 00:22:10.319 "method": "bdev_set_options", 00:22:10.319 "params": { 00:22:10.319 "bdev_auto_examine": true, 00:22:10.319 "bdev_io_cache_size": 256, 00:22:10.319 "bdev_io_pool_size": 65535, 00:22:10.319 "iobuf_large_cache_size": 16, 00:22:10.319 "iobuf_small_cache_size": 128 00:22:10.319 } 00:22:10.319 }, 00:22:10.319 { 00:22:10.319 "method": "bdev_raid_set_options", 00:22:10.319 "params": { 00:22:10.319 "process_max_bandwidth_mb_sec": 0, 00:22:10.319 "process_window_size_kb": 1024 00:22:10.319 } 00:22:10.319 }, 00:22:10.319 { 00:22:10.319 "method": "bdev_iscsi_set_options", 00:22:10.319 "params": { 00:22:10.319 "timeout_sec": 30 00:22:10.319 } 00:22:10.319 }, 00:22:10.319 { 00:22:10.319 "method": "bdev_nvme_set_options", 00:22:10.319 "params": { 00:22:10.319 "action_on_timeout": "none", 00:22:10.319 "allow_accel_sequence": false, 00:22:10.319 "arbitration_burst": 0, 00:22:10.319 "bdev_retry_count": 3, 00:22:10.319 "ctrlr_loss_timeout_sec": 0, 00:22:10.319 "delay_cmd_submit": true, 00:22:10.319 "dhchap_dhgroups": [ 00:22:10.319 "null", 00:22:10.319 "ffdhe2048", 00:22:10.319 "ffdhe3072", 00:22:10.319 "ffdhe4096", 00:22:10.319 "ffdhe6144", 00:22:10.319 "ffdhe8192" 00:22:10.319 ], 00:22:10.319 "dhchap_digests": [ 00:22:10.319 "sha256", 00:22:10.319 "sha384", 00:22:10.319 "sha512" 00:22:10.319 ], 00:22:10.319 "disable_auto_failback": false, 00:22:10.319 "fast_io_fail_timeout_sec": 0, 00:22:10.319 "generate_uuids": false, 00:22:10.319 "high_priority_weight": 0, 00:22:10.319 "io_path_stat": false, 00:22:10.319 "io_queue_requests": 0, 00:22:10.319 "keep_alive_timeout_ms": 10000, 00:22:10.319 "low_priority_weight": 0, 00:22:10.319 "medium_priority_weight": 0, 00:22:10.319 "nvme_adminq_poll_period_us": 10000, 00:22:10.319 "nvme_error_stat": false, 00:22:10.319 "nvme_ioq_poll_period_us": 0, 00:22:10.319 "rdma_cm_event_timeout_ms": 0, 00:22:10.319 "rdma_max_cq_size": 0, 00:22:10.319 "rdma_srq_size": 0, 00:22:10.319 "reconnect_delay_sec": 0, 00:22:10.319 "timeout_admin_us": 0, 00:22:10.319 "timeout_us": 0, 00:22:10.319 
"transport_ack_timeout": 0, 00:22:10.319 "transport_retry_count": 4, 00:22:10.319 "transport_tos": 0 00:22:10.319 } 00:22:10.319 }, 00:22:10.319 { 00:22:10.319 "method": "bdev_nvme_set_hotplug", 00:22:10.319 "params": { 00:22:10.319 "enable": false, 00:22:10.319 "period_us": 100000 00:22:10.319 } 00:22:10.319 }, 00:22:10.319 { 00:22:10.319 "method": "bdev_malloc_create", 00:22:10.319 "params": { 00:22:10.319 "block_size": 4096, 00:22:10.319 "dif_is_head_of_md": false, 00:22:10.319 "dif_pi_format": 0, 00:22:10.319 "dif_type": 0, 00:22:10.319 "md_size": 0, 00:22:10.319 "name": "malloc0", 00:22:10.319 "num_blocks": 8192, 00:22:10.319 "optimal_io_boundary": 0, 00:22:10.319 "physical_block_size": 4096, 00:22:10.319 "uuid": "9a371b40-62e4-4aa5-8d21-a70f70f1d461" 00:22:10.319 } 00:22:10.319 }, 00:22:10.319 { 00:22:10.319 "method": "bdev_wait_for_examine" 00:22:10.319 } 00:22:10.319 ] 00:22:10.319 }, 00:22:10.319 { 00:22:10.319 "subsystem": "nbd", 00:22:10.319 "config": [] 00:22:10.319 }, 00:22:10.319 { 00:22:10.319 "subsystem": "scheduler", 00:22:10.319 "config": [ 00:22:10.319 { 00:22:10.319 "method": "framework_set_scheduler", 00:22:10.319 "params": { 00:22:10.319 "name": "static" 00:22:10.319 } 00:22:10.319 } 00:22:10.319 ] 00:22:10.319 }, 00:22:10.319 { 00:22:10.319 "subsystem": "nvmf", 00:22:10.319 "config": [ 00:22:10.319 { 00:22:10.319 "method": "nvmf_set_config", 00:22:10.319 "params": { 00:22:10.319 "admin_cmd_passthru": { 00:22:10.319 "identify_ctrlr": false 00:22:10.319 }, 00:22:10.319 "dhchap_dhgroups": [ 00:22:10.319 "null", 00:22:10.319 "ffdhe2048", 00:22:10.319 "ffdhe3072", 00:22:10.319 "ffdhe4096", 00:22:10.319 "ffdhe6144", 00:22:10.319 "ffdhe8192" 00:22:10.319 ], 00:22:10.319 "dhchap_digests": [ 00:22:10.319 "sha256", 00:22:10.319 "sha384", 00:22:10.319 "sha512" 00:22:10.319 ], 00:22:10.319 "discovery_filter": "match_any" 00:22:10.319 } 00:22:10.319 }, 00:22:10.319 { 00:22:10.319 "method": "nvmf_set_max_subsystems", 00:22:10.319 "params": { 00:22:10.319 "max_subsystems": 1024 00:22:10.319 } 00:22:10.319 }, 00:22:10.319 { 00:22:10.319 "method": "nvmf_set_crdt", 00:22:10.319 "params": { 00:22:10.319 "crdt1": 0, 00:22:10.319 "crdt2": 0, 00:22:10.319 "crdt3": 0 00:22:10.319 } 00:22:10.319 }, 00:22:10.319 { 00:22:10.319 "method": "nvmf_create_transport", 00:22:10.319 "params": { 00:22:10.319 "abort_timeout_sec": 1, 00:22:10.319 "ack_timeout": 0, 00:22:10.319 "buf_cache_size": 4294967295, 00:22:10.319 "c2h_success": false, 00:22:10.319 "data_wr_pool_size": 0, 00:22:10.319 "dif_insert_or_strip": false, 00:22:10.319 "in_capsule_data_size": 4096, 00:22:10.319 "io_unit_size": 131072, 00:22:10.319 "max_aq_depth": 128, 00:22:10.319 "max_io_qpairs_per_ctrlr": 127, 00:22:10.319 "max_io_size": 131072, 00:22:10.319 "max_queue_depth": 128, 00:22:10.319 "num_shared_buffers": 511, 00:22:10.319 "sock_priority": 0, 00:22:10.319 "trtype": "TCP", 00:22:10.320 "zcopy": false 00:22:10.320 } 00:22:10.320 }, 00:22:10.320 { 00:22:10.320 "method": "nvmf_create_subsystem", 00:22:10.320 "params": { 00:22:10.320 "allow_any_host": false, 00:22:10.320 "ana_reporting": false, 00:22:10.320 "max_cntlid": 65519, 00:22:10.320 "max_namespaces": 10, 00:22:10.320 "min_cntlid": 1, 00:22:10.320 "model_number": "SPDK bdev Controller", 00:22:10.320 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.320 "serial_number": "SPDK00000000000001" 00:22:10.320 } 00:22:10.320 }, 00:22:10.320 { 00:22:10.320 "method": "nvmf_subsystem_add_host", 00:22:10.320 "params": { 00:22:10.320 "host": "nqn.2016-06.io.spdk:host1", 00:22:10.320 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:10.320 "psk": "key0" 00:22:10.320 } 00:22:10.320 }, 00:22:10.320 { 00:22:10.320 "method": "nvmf_subsystem_add_ns", 00:22:10.320 "params": { 00:22:10.320 "namespace": { 00:22:10.320 "bdev_name": "malloc0", 00:22:10.320 "nguid": "9A371B4062E44AA58D21A70F70F1D461", 00:22:10.320 "no_auto_visible": false, 00:22:10.320 "nsid": 1, 00:22:10.320 "uuid": "9a371b40-62e4-4aa5-8d21-a70f70f1d461" 00:22:10.320 }, 00:22:10.320 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:10.320 } 00:22:10.320 }, 00:22:10.320 { 00:22:10.320 "method": "nvmf_subsystem_add_listener", 00:22:10.320 "params": { 00:22:10.320 "listen_address": { 00:22:10.320 "adrfam": "IPv4", 00:22:10.320 "traddr": "10.0.0.3", 00:22:10.320 "trsvcid": "4420", 00:22:10.320 "trtype": "TCP" 00:22:10.320 }, 00:22:10.320 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.320 "secure_channel": true 00:22:10.320 } 00:22:10.320 } 00:22:10.320 ] 00:22:10.320 } 00:22:10.320 ] 00:22:10.320 }' 00:22:10.320 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:10.579 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:22:10.579 "subsystems": [ 00:22:10.579 { 00:22:10.579 "subsystem": "keyring", 00:22:10.579 "config": [ 00:22:10.579 { 00:22:10.579 "method": "keyring_file_add_key", 00:22:10.579 "params": { 00:22:10.579 "name": "key0", 00:22:10.579 "path": "/tmp/tmp.QttsGYeAgw" 00:22:10.579 } 00:22:10.579 } 00:22:10.579 ] 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 "subsystem": "iobuf", 00:22:10.579 "config": [ 00:22:10.579 { 00:22:10.579 "method": "iobuf_set_options", 00:22:10.579 "params": { 00:22:10.579 "enable_numa": false, 00:22:10.579 "large_bufsize": 135168, 00:22:10.579 "large_pool_count": 1024, 00:22:10.579 "small_bufsize": 8192, 00:22:10.579 "small_pool_count": 8192 00:22:10.579 } 00:22:10.579 } 00:22:10.579 ] 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 "subsystem": "sock", 00:22:10.579 "config": [ 00:22:10.579 { 00:22:10.579 "method": "sock_set_default_impl", 00:22:10.579 "params": { 00:22:10.579 "impl_name": "posix" 00:22:10.579 } 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 "method": "sock_impl_set_options", 00:22:10.579 "params": { 00:22:10.579 "enable_ktls": false, 00:22:10.579 "enable_placement_id": 0, 00:22:10.579 "enable_quickack": false, 00:22:10.579 "enable_recv_pipe": true, 00:22:10.579 "enable_zerocopy_send_client": false, 00:22:10.579 "enable_zerocopy_send_server": true, 00:22:10.579 "impl_name": "ssl", 00:22:10.579 "recv_buf_size": 4096, 00:22:10.579 "send_buf_size": 4096, 00:22:10.579 "tls_version": 0, 00:22:10.579 "zerocopy_threshold": 0 00:22:10.579 } 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 "method": "sock_impl_set_options", 00:22:10.579 "params": { 00:22:10.579 "enable_ktls": false, 00:22:10.579 "enable_placement_id": 0, 00:22:10.579 "enable_quickack": false, 00:22:10.579 "enable_recv_pipe": true, 00:22:10.579 "enable_zerocopy_send_client": false, 00:22:10.579 "enable_zerocopy_send_server": true, 00:22:10.579 "impl_name": "posix", 00:22:10.579 "recv_buf_size": 2097152, 00:22:10.579 "send_buf_size": 2097152, 00:22:10.579 "tls_version": 0, 00:22:10.579 "zerocopy_threshold": 0 00:22:10.579 } 00:22:10.579 } 00:22:10.579 ] 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 "subsystem": "vmd", 00:22:10.579 "config": [] 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 "subsystem": "accel", 00:22:10.579 "config": [ 00:22:10.579 { 00:22:10.579 "method": "accel_set_options", 00:22:10.579 
"params": { 00:22:10.579 "buf_count": 2048, 00:22:10.579 "large_cache_size": 16, 00:22:10.579 "sequence_count": 2048, 00:22:10.579 "small_cache_size": 128, 00:22:10.579 "task_count": 2048 00:22:10.579 } 00:22:10.579 } 00:22:10.579 ] 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 "subsystem": "bdev", 00:22:10.579 "config": [ 00:22:10.579 { 00:22:10.579 "method": "bdev_set_options", 00:22:10.579 "params": { 00:22:10.579 "bdev_auto_examine": true, 00:22:10.579 "bdev_io_cache_size": 256, 00:22:10.579 "bdev_io_pool_size": 65535, 00:22:10.579 "iobuf_large_cache_size": 16, 00:22:10.579 "iobuf_small_cache_size": 128 00:22:10.579 } 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 "method": "bdev_raid_set_options", 00:22:10.579 "params": { 00:22:10.579 "process_max_bandwidth_mb_sec": 0, 00:22:10.579 "process_window_size_kb": 1024 00:22:10.579 } 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 "method": "bdev_iscsi_set_options", 00:22:10.579 "params": { 00:22:10.579 "timeout_sec": 30 00:22:10.579 } 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 "method": "bdev_nvme_set_options", 00:22:10.579 "params": { 00:22:10.579 "action_on_timeout": "none", 00:22:10.579 "allow_accel_sequence": false, 00:22:10.579 "arbitration_burst": 0, 00:22:10.579 "bdev_retry_count": 3, 00:22:10.579 "ctrlr_loss_timeout_sec": 0, 00:22:10.579 "delay_cmd_submit": true, 00:22:10.579 "dhchap_dhgroups": [ 00:22:10.579 "null", 00:22:10.579 "ffdhe2048", 00:22:10.579 "ffdhe3072", 00:22:10.579 "ffdhe4096", 00:22:10.579 "ffdhe6144", 00:22:10.579 "ffdhe8192" 00:22:10.579 ], 00:22:10.579 "dhchap_digests": [ 00:22:10.579 "sha256", 00:22:10.579 "sha384", 00:22:10.579 "sha512" 00:22:10.579 ], 00:22:10.579 "disable_auto_failback": false, 00:22:10.579 "fast_io_fail_timeout_sec": 0, 00:22:10.579 "generate_uuids": false, 00:22:10.579 "high_priority_weight": 0, 00:22:10.579 "io_path_stat": false, 00:22:10.579 "io_queue_requests": 512, 00:22:10.579 "keep_alive_timeout_ms": 10000, 00:22:10.579 "low_priority_weight": 0, 00:22:10.579 "medium_priority_weight": 0, 00:22:10.579 "nvme_adminq_poll_period_us": 10000, 00:22:10.579 "nvme_error_stat": false, 00:22:10.579 "nvme_ioq_poll_period_us": 0, 00:22:10.579 "rdma_cm_event_timeout_ms": 0, 00:22:10.579 "rdma_max_cq_size": 0, 00:22:10.579 "rdma_srq_size": 0, 00:22:10.579 "reconnect_delay_sec": 0, 00:22:10.579 "timeout_admin_us": 0, 00:22:10.579 "timeout_us": 0, 00:22:10.579 "transport_ack_timeout": 0, 00:22:10.579 "transport_retry_count": 4, 00:22:10.579 "transport_tos": 0 00:22:10.579 } 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 "method": "bdev_nvme_attach_controller", 00:22:10.579 "params": { 00:22:10.579 "adrfam": "IPv4", 00:22:10.579 "ctrlr_loss_timeout_sec": 0, 00:22:10.579 "ddgst": false, 00:22:10.579 "fast_io_fail_timeout_sec": 0, 00:22:10.579 "hdgst": false, 00:22:10.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:10.579 "multipath": "multipath", 00:22:10.579 "name": "TLSTEST", 00:22:10.579 "prchk_guard": false, 00:22:10.579 "prchk_reftag": false, 00:22:10.579 "psk": "key0", 00:22:10.579 "reconnect_delay_sec": 0, 00:22:10.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.579 "traddr": "10.0.0.3", 00:22:10.579 "trsvcid": "4420", 00:22:10.579 "trtype": "TCP" 00:22:10.579 } 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 "method": "bdev_nvme_set_hotplug", 00:22:10.579 "params": { 00:22:10.579 "enable": false, 00:22:10.579 "period_us": 100000 00:22:10.579 } 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 "method": "bdev_wait_for_examine" 00:22:10.579 } 00:22:10.579 ] 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 
"subsystem": "nbd", 00:22:10.579 "config": [] 00:22:10.579 } 00:22:10.579 ] 00:22:10.579 }' 00:22:10.579 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 84118 00:22:10.580 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84118 ']' 00:22:10.580 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84118 00:22:10.580 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:10.580 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.580 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84118 00:22:10.580 killing process with pid 84118 00:22:10.580 Received shutdown signal, test time was about 10.000000 seconds 00:22:10.580 00:22:10.580 Latency(us) 00:22:10.580 [2024-12-06T13:16:17.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.580 [2024-12-06T13:16:17.108Z] =================================================================================================================== 00:22:10.580 [2024-12-06T13:16:17.108Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:10.580 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:10.580 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:10.580 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84118' 00:22:10.580 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84118 00:22:10.580 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84118 00:22:10.838 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 84010 00:22:10.838 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84010 ']' 00:22:10.838 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84010 00:22:10.838 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:10.838 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.838 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84010 00:22:10.838 killing process with pid 84010 00:22:10.838 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:10.838 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:10.838 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84010' 00:22:10.838 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84010 00:22:10.838 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84010 00:22:11.097 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:11.097 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:11.097 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:11.097 13:16:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:22:11.097 "subsystems": [ 00:22:11.097 { 00:22:11.098 "subsystem": "keyring", 00:22:11.098 "config": [ 00:22:11.098 { 00:22:11.098 "method": "keyring_file_add_key", 00:22:11.098 "params": { 00:22:11.098 "name": "key0", 00:22:11.098 "path": "/tmp/tmp.QttsGYeAgw" 00:22:11.098 } 00:22:11.098 } 00:22:11.098 ] 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "subsystem": "iobuf", 00:22:11.098 "config": [ 00:22:11.098 { 00:22:11.098 "method": "iobuf_set_options", 00:22:11.098 "params": { 00:22:11.098 "enable_numa": false, 00:22:11.098 "large_bufsize": 135168, 00:22:11.098 "large_pool_count": 1024, 00:22:11.098 "small_bufsize": 8192, 00:22:11.098 "small_pool_count": 8192 00:22:11.098 } 00:22:11.098 } 00:22:11.098 ] 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "subsystem": "sock", 00:22:11.098 "config": [ 00:22:11.098 { 00:22:11.098 "method": "sock_set_default_impl", 00:22:11.098 "params": { 00:22:11.098 "impl_name": "posix" 00:22:11.098 } 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "method": "sock_impl_set_options", 00:22:11.098 "params": { 00:22:11.098 "enable_ktls": false, 00:22:11.098 "enable_placement_id": 0, 00:22:11.098 "enable_quickack": false, 00:22:11.098 "enable_recv_pipe": true, 00:22:11.098 "enable_zerocopy_send_client": false, 00:22:11.098 "enable_zerocopy_send_server": true, 00:22:11.098 "impl_name": "ssl", 00:22:11.098 "recv_buf_size": 4096, 00:22:11.098 "send_buf_size": 4096, 00:22:11.098 "tls_version": 0, 00:22:11.098 "zerocopy_threshold": 0 00:22:11.098 } 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "method": "sock_impl_set_options", 00:22:11.098 "params": { 00:22:11.098 "enable_ktls": false, 00:22:11.098 "enable_placement_id": 0, 00:22:11.098 "enable_quickack": false, 00:22:11.098 "enable_recv_pipe": true, 00:22:11.098 "enable_zerocopy_send_client": false, 00:22:11.098 "enable_zerocopy_send_server": true, 00:22:11.098 "impl_name": "posix", 00:22:11.098 "recv_buf_size": 2097152, 00:22:11.098 "send_buf_size": 2097152, 00:22:11.098 "tls_version": 0, 00:22:11.098 "zerocopy_threshold": 0 00:22:11.098 } 00:22:11.098 } 00:22:11.098 ] 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "subsystem": "vmd", 00:22:11.098 "config": [] 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "subsystem": "accel", 00:22:11.098 "config": [ 00:22:11.098 { 00:22:11.098 "method": "accel_set_options", 00:22:11.098 "params": { 00:22:11.098 "buf_count": 2048, 00:22:11.098 "large_cache_size": 16, 00:22:11.098 "sequence_count": 2048, 00:22:11.098 "small_cache_size": 128, 00:22:11.098 "task_count": 2048 00:22:11.098 } 00:22:11.098 } 00:22:11.098 ] 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "subsystem": "bdev", 00:22:11.098 "config": [ 00:22:11.098 { 00:22:11.098 "method": "bdev_set_options", 00:22:11.098 "params": { 00:22:11.098 "bdev_auto_examine": true, 00:22:11.098 "bdev_io_cache_size": 256, 00:22:11.098 "bdev_io_pool_size": 65535, 00:22:11.098 "iobuf_large_cache_size": 16, 00:22:11.098 "iobuf_small_cache_size": 128 00:22:11.098 } 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "method": "bdev_raid_set_options", 00:22:11.098 "params": { 00:22:11.098 "process_max_bandwidth_mb_sec": 0, 00:22:11.098 "process_window_size_kb": 1024 00:22:11.098 } 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "method": "bdev_iscsi_set_options", 00:22:11.098 "params": { 00:22:11.098 "timeout_sec": 30 00:22:11.098 } 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "method": "bdev_nvme_set_options", 00:22:11.098 "params": { 00:22:11.098 "action_on_timeout": "none", 00:22:11.098 
"allow_accel_sequence": false, 00:22:11.098 "arbitration_burst": 0, 00:22:11.098 "bdev_retry_count": 3, 00:22:11.098 "ctrlr_loss_timeout_sec": 0, 00:22:11.098 "delay_cmd_submit": true, 00:22:11.098 "dhchap_dhgroups": [ 00:22:11.098 "null", 00:22:11.098 "ffdhe2048", 00:22:11.098 "ffdhe3072", 00:22:11.098 "ffdhe4096", 00:22:11.098 "ffdhe6144", 00:22:11.098 "ffdhe8192" 00:22:11.098 ], 00:22:11.098 "dhchap_digests": [ 00:22:11.098 "sha256", 00:22:11.098 "sha384", 00:22:11.098 "sha512" 00:22:11.098 ], 00:22:11.098 "disable_auto_failback": false, 00:22:11.098 "fast_io_fail_timeout_sec": 0, 00:22:11.098 "generate_uuids": false, 00:22:11.098 "high_priority_weight": 0, 00:22:11.098 "io_path_stat": false, 00:22:11.098 "io_queue_requests": 0, 00:22:11.098 "keep_alive_timeout_ms": 10000, 00:22:11.098 "low_priority_weight": 0, 00:22:11.098 "medium_priority_weight": 0, 00:22:11.098 "nvme_adminq_poll_period_us": 10000, 00:22:11.098 "nvme_error_stat": false, 00:22:11.098 "nvme_ioq_poll_period_us": 0, 00:22:11.098 "rdma_cm_event_timeout_ms": 0, 00:22:11.098 "rdma_max_cq_size": 0, 00:22:11.098 "rdma_srq_size": 0, 00:22:11.098 "reconnect_delay_sec": 0, 00:22:11.098 "timeout_admin_us": 0, 00:22:11.098 "timeout_us": 0, 00:22:11.098 "transport_ack_timeout": 0, 00:22:11.098 "transport_retry_count": 4, 00:22:11.098 "transport_tos": 0 00:22:11.098 } 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "method": "bdev_nvme_set_hotplug", 00:22:11.098 "params": { 00:22:11.098 "enable": false, 00:22:11.098 "period_us": 100000 00:22:11.098 } 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "method": "bdev_malloc_create", 00:22:11.098 "params": { 00:22:11.098 "block_size": 4096, 00:22:11.098 "dif_is_head_of_md": false, 00:22:11.098 "dif_pi_format": 0, 00:22:11.098 "dif_type": 0, 00:22:11.098 "md_size": 0, 00:22:11.098 "name": "malloc0", 00:22:11.098 "num_blocks": 8192, 00:22:11.098 "optimal_io_boundary": 0, 00:22:11.098 "physical_block_size": 4096, 00:22:11.098 "uuid": "9a371b40-62e4-4aa5-8d21-a70f70f1d461" 00:22:11.098 } 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "method": "bdev_wait_for_examine" 00:22:11.098 } 00:22:11.098 ] 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "subsystem": "nbd", 00:22:11.098 "config": [] 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "subsystem": "scheduler", 00:22:11.098 "config": [ 00:22:11.098 { 00:22:11.098 "method": "framework_set_scheduler", 00:22:11.098 "params": { 00:22:11.098 "name": "static" 00:22:11.098 } 00:22:11.098 } 00:22:11.098 ] 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "subsystem": "nvmf", 00:22:11.098 "config": [ 00:22:11.098 { 00:22:11.098 "method": "nvmf_set_config", 00:22:11.098 "params": { 00:22:11.098 "admin_cmd_passthru": { 00:22:11.098 "identify_ctrlr": false 00:22:11.098 }, 00:22:11.098 "dhchap_dhgroups": [ 00:22:11.098 "null", 00:22:11.098 "ffdhe2048", 00:22:11.098 "ffdhe3072", 00:22:11.098 "ffdhe4096", 00:22:11.098 "ffdhe6144", 00:22:11.098 "ffdhe8192" 00:22:11.098 ], 00:22:11.098 "dhchap_digests": [ 00:22:11.098 "sha256", 00:22:11.098 "sha384", 00:22:11.098 "sha512" 00:22:11.098 ], 00:22:11.098 "discovery_filter": "match_any" 00:22:11.098 } 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "method": "nvmf_set_max_subsystems", 00:22:11.098 "params": { 00:22:11.098 "max_subsystems": 1024 00:22:11.098 } 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "method": "nvmf_set_crdt", 00:22:11.098 "params": { 00:22:11.098 "crdt1": 0, 00:22:11.098 "crdt2": 0, 00:22:11.098 "crdt3": 0 00:22:11.098 } 00:22:11.098 }, 00:22:11.098 { 00:22:11.098 "method": "nvmf_create_transport", 00:22:11.098 "params": 
{ 00:22:11.098 "abort_timeout_sec": 1, 00:22:11.098 "ack_timeout": 0, 00:22:11.099 "buf_cache_size": 4294967295, 00:22:11.099 "c2h_success": false, 00:22:11.099 "data_wr_pool_size": 0, 00:22:11.099 "dif_insert_or_strip": false, 00:22:11.099 "in_capsule_data_size": 4096, 00:22:11.099 "io_unit_size": 131072, 00:22:11.099 "max_aq_depth": 128, 00:22:11.099 "max_io_qpairs_per_ctrlr": 127, 00:22:11.099 "max_io_size": 131072, 00:22:11.099 "max_queue_depth": 128, 00:22:11.099 "num_shared_buffers": 511, 00:22:11.099 "sock_priority": 0, 00:22:11.099 "trtype": "TCP", 00:22:11.099 "zcopy": false 00:22:11.099 } 00:22:11.099 }, 00:22:11.099 { 00:22:11.099 "method": "nvmf_create_subsystem", 00:22:11.099 "params": { 00:22:11.099 "allow_any_host": false, 00:22:11.099 "ana_reporting": false, 00:22:11.099 "max_cntlid": 65519, 00:22:11.099 "max_namespaces": 10, 00:22:11.099 "min_cntlid": 1, 00:22:11.099 "model_number": "SPDK bdev Controller", 00:22:11.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.099 "serial_number": "SPDK00000000000001" 00:22:11.099 } 00:22:11.099 }, 00:22:11.099 { 00:22:11.099 "method": "nvmf_subsystem_add_host", 00:22:11.099 "params": { 00:22:11.099 "host": "nqn.2016-06.io.spdk:host1", 00:22:11.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.099 "psk": "key0" 00:22:11.099 } 00:22:11.099 }, 00:22:11.099 { 00:22:11.099 "method": "nvmf_subsystem_add_ns", 00:22:11.099 "params": { 00:22:11.099 "namespace": { 00:22:11.099 "bdev_name": "malloc0", 00:22:11.099 "nguid": "9A371B4062E44AA58D21A70F70F1D461", 00:22:11.099 "no_auto_visible": false, 00:22:11.099 "nsid": 1, 00:22:11.099 "uuid": "9a371b40-62e4-4aa5-8d21-a70f70f1d461" 00:22:11.099 }, 00:22:11.099 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:11.099 } 00:22:11.099 }, 00:22:11.099 { 00:22:11.099 "method": "nvmf_subsystem_add_listener", 00:22:11.099 "params": { 00:22:11.099 "listen_address": { 00:22:11.099 "adrfam": "IPv4", 00:22:11.099 "traddr": "10.0.0.3", 00:22:11.099 "trsvcid": "4420", 00:22:11.099 "trtype": "TCP" 00:22:11.099 }, 00:22:11.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.099 "secure_channel": true 00:22:11.099 } 00:22:11.099 } 00:22:11.099 ] 00:22:11.099 } 00:22:11.099 ] 00:22:11.099 }' 00:22:11.099 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.099 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84191 00:22:11.099 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:11.099 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84191 00:22:11.099 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84191 ']' 00:22:11.099 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.099 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.099 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
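The JSON echoed at target/tls.sh@205 above is not hand-written: it is the tgtconf capture taken moments earlier with save_config (@198), and the freshly started nvmf_tgt reads it back through "-c /dev/fd/62". That is the point of this phase: the keyring entry, the TLS listener, and the PSK-bound host must all survive a config save/restore round trip. The /dev/fd path is how bash process substitution typically surfaces; a minimal sketch of the same pattern (the fd number varies from run to run):

    # Capture the running target's config, then boot a new target from it
    # without a temp file (bash process substitution).
    tgtconf=$(scripts/rpc.py save_config)
    nvmf_tgt -m 0x2 -c <(echo "$tgtconf") &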
00:22:11.099 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.099 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.099 [2024-12-06 13:16:17.477578] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:11.099 [2024-12-06 13:16:17.477708] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.358 [2024-12-06 13:16:17.632037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.358 [2024-12-06 13:16:17.664067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.358 [2024-12-06 13:16:17.664121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.358 [2024-12-06 13:16:17.664132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.358 [2024-12-06 13:16:17.664140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.358 [2024-12-06 13:16:17.664148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.358 [2024-12-06 13:16:17.664509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.358 [2024-12-06 13:16:17.859851] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.616 [2024-12-06 13:16:17.891809] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:11.616 [2024-12-06 13:16:17.892057] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:12.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:12.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:12.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=84235 00:22:12.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 84235 /var/tmp/bdevperf.sock 00:22:12.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84235 ']' 00:22:12.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:12.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:12.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:22:12.183 "subsystems": [ 00:22:12.183 { 00:22:12.183 "subsystem": "keyring", 00:22:12.183 "config": [ 00:22:12.183 { 00:22:12.183 "method": "keyring_file_add_key", 00:22:12.183 "params": { 00:22:12.183 "name": "key0", 00:22:12.183 "path": "/tmp/tmp.QttsGYeAgw" 00:22:12.183 } 00:22:12.183 } 00:22:12.183 ] 00:22:12.183 }, 00:22:12.183 { 00:22:12.183 "subsystem": "iobuf", 00:22:12.183 "config": [ 00:22:12.183 { 00:22:12.183 "method": "iobuf_set_options", 00:22:12.183 "params": { 00:22:12.183 "enable_numa": false, 00:22:12.183 "large_bufsize": 135168, 00:22:12.183 "large_pool_count": 1024, 00:22:12.184 "small_bufsize": 8192, 00:22:12.184 "small_pool_count": 8192 00:22:12.184 } 00:22:12.184 } 00:22:12.184 ] 00:22:12.184 }, 00:22:12.184 { 00:22:12.184 "subsystem": "sock", 00:22:12.184 "config": [ 00:22:12.184 { 00:22:12.184 "method": "sock_set_default_impl", 00:22:12.184 "params": { 00:22:12.184 "impl_name": "posix" 00:22:12.184 } 00:22:12.184 }, 00:22:12.184 { 00:22:12.184 "method": "sock_impl_set_options", 00:22:12.184 "params": { 00:22:12.184 "enable_ktls": false, 00:22:12.184 "enable_placement_id": 0, 00:22:12.184 "enable_quickack": false, 00:22:12.184 "enable_recv_pipe": true, 00:22:12.184 "enable_zerocopy_send_client": false, 00:22:12.184 "enable_zerocopy_send_server": true, 00:22:12.184 "impl_name": "ssl", 00:22:12.184 "recv_buf_size": 4096, 00:22:12.184 "send_buf_size": 4096, 00:22:12.184 "tls_version": 0, 00:22:12.184 "zerocopy_threshold": 0 00:22:12.184 } 00:22:12.184 }, 00:22:12.184 { 00:22:12.184 "method": "sock_impl_set_options", 00:22:12.184 "params": { 00:22:12.184 "enable_ktls": false, 00:22:12.184 "enable_placement_id": 0, 00:22:12.184 "enable_quickack": false, 00:22:12.184 "enable_recv_pipe": true, 00:22:12.184 "enable_zerocopy_send_client": false, 00:22:12.184 "enable_zerocopy_send_server": true, 00:22:12.184 "impl_name": "posix", 00:22:12.184 "recv_buf_size": 2097152, 00:22:12.184 "send_buf_size": 2097152, 00:22:12.184 "tls_version": 0, 00:22:12.184 "zerocopy_threshold": 0 00:22:12.184 } 00:22:12.184 } 00:22:12.184 ] 00:22:12.184 }, 00:22:12.184 { 00:22:12.184 "subsystem": "vmd", 00:22:12.184 "config": [] 00:22:12.184 }, 00:22:12.184 { 00:22:12.184 "subsystem": "accel", 00:22:12.184 "config": [ 00:22:12.184 { 00:22:12.184 "method": "accel_set_options", 00:22:12.184 "params": { 00:22:12.184 "buf_count": 2048, 00:22:12.184 "large_cache_size": 16, 00:22:12.184 "sequence_count": 2048, 00:22:12.184 "small_cache_size": 128, 00:22:12.184 "task_count": 2048 00:22:12.184 } 00:22:12.184 } 00:22:12.184 ] 00:22:12.184 }, 00:22:12.184 { 00:22:12.184 "subsystem": "bdev", 00:22:12.184 "config": [ 00:22:12.184 { 00:22:12.184 "method": "bdev_set_options", 00:22:12.184 "params": { 00:22:12.184 "bdev_auto_examine": true, 00:22:12.184 "bdev_io_cache_size": 256, 00:22:12.184 "bdev_io_pool_size": 65535, 00:22:12.184 "iobuf_large_cache_size": 16, 00:22:12.184 "iobuf_small_cache_size": 128 00:22:12.184 } 00:22:12.184 }, 00:22:12.184 { 00:22:12.184 "method": "bdev_raid_set_options", 
00:22:12.184 "params": { 00:22:12.184 "process_max_bandwidth_mb_sec": 0, 00:22:12.184 "process_window_size_kb": 1024 00:22:12.184 } 00:22:12.184 }, 00:22:12.184 { 00:22:12.184 "method": "bdev_iscsi_set_options", 00:22:12.184 "params": { 00:22:12.184 "timeout_sec": 30 00:22:12.184 } 00:22:12.184 }, 00:22:12.184 { 00:22:12.184 "method": "bdev_nvme_set_options", 00:22:12.184 "params": { 00:22:12.184 "action_on_timeout": "none", 00:22:12.184 "allow_accel_sequence": false, 00:22:12.184 "arbitration_burst": 0, 00:22:12.184 "bdev_retry_count": 3, 00:22:12.184 "ctrlr_loss_timeout_sec": 0, 00:22:12.184 "delay_cmd_submit": true, 00:22:12.184 "dhchap_dhgroups": [ 00:22:12.184 "null", 00:22:12.184 "ffdhe2048", 00:22:12.184 "ffdhe3072", 00:22:12.184 "ffdhe4096", 00:22:12.184 "ffdhe6144", 00:22:12.184 "ffdhe8192" 00:22:12.184 ], 00:22:12.184 "dhchap_digests": [ 00:22:12.184 "sha256", 00:22:12.184 "sha384", 00:22:12.184 "sha512" 00:22:12.184 ], 00:22:12.184 "disable_auto_failback": false, 00:22:12.184 "fast_io_fail_timeout_sec": 0, 00:22:12.184 "generate_uuids": false, 00:22:12.184 "high_priority_weight": 0, 00:22:12.184 "io_path_stat": false, 00:22:12.184 "io_queue_requests": 512, 00:22:12.184 "keep_alive_timeout_ms": 10000, 00:22:12.184 "low_priority_weight": 0, 00:22:12.184 "medium_priority_weight": 0, 00:22:12.184 "nvme_adminq_poll_period_us": 10000, 00:22:12.184 "nvme_error_stat": false, 00:22:12.184 "nvme_ioq_poll_period_us": 0, 00:22:12.184 "rdma_cm_event_timeout_ms": 0, 00:22:12.184 "rdma_max_cq_size": 0, 00:22:12.184 "rdma_srq_size": 0, 00:22:12.184 "reconnect_delay_sec": 0, 00:22:12.184 "timeout_admin_us": 0, 00:22:12.184 "timeout_us": 0, 00:22:12.184 "transport_ack_timeout": 0, 00:22:12.184 "transport_retry_count": 4, 00:22:12.184 "transport_tos": 0 00:22:12.184 } 00:22:12.184 }, 00:22:12.184 { 00:22:12.184 "method": "bdev_nvme_attach_controller", 00:22:12.184 "params": { 00:22:12.184 "adrfam": "IPv4", 00:22:12.184 "ctrlr_loss_timeout_sec": 0, 00:22:12.184 "ddgst": false, 00:22:12.184 "fast_io_fail_timeout_sec": 0, 00:22:12.184 "hdgst": false, 00:22:12.184 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.184 "multipath": "multipath", 00:22:12.184 "name": "TLSTEST", 00:22:12.184 "prchk_guard": false, 00:22:12.184 "prchk_reftag": false, 00:22:12.184 "psk": "key0", 00:22:12.184 "reconnect_delay_sec": 0, 00:22:12.184 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.184 "traddr": "10.0.0.3", 00:22:12.184 "trsvcid": "4420", 00:22:12.184 "trtype": "TCP" 00:22:12.184 } 00:22:12.184 }, 00:22:12.184 { 00:22:12.184 "method": "bdev_nvme_set_hotplug", 00:22:12.184 "params": { 00:22:12.184 "enable": false, 00:22:12.184 "period_us": 100000 00:22:12.184 } 00:22:12.184 }, 00:22:12.184 { 00:22:12.184 "method": "bdev_wait_for_examine" 00:22:12.184 } 00:22:12.184 ] 00:22:12.184 }, 00:22:12.184 { 00:22:12.184 "subsystem": "nbd", 00:22:12.184 "config": [] 00:22:12.184 } 00:22:12.184 ] 00:22:12.184 }' 00:22:12.184 [2024-12-06 13:16:18.569871] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:22:12.184 [2024-12-06 13:16:18.570505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84235 ] 00:22:12.442 [2024-12-06 13:16:18.720757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.442 [2024-12-06 13:16:18.753718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.442 [2024-12-06 13:16:18.889783] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:13.377 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.377 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:13.377 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:13.377 Running I/O for 10 seconds... 00:22:15.244 3840.00 IOPS, 15.00 MiB/s [2024-12-06T13:16:23.146Z] 3648.00 IOPS, 14.25 MiB/s [2024-12-06T13:16:24.081Z] 3751.33 IOPS, 14.65 MiB/s [2024-12-06T13:16:25.014Z] 3748.50 IOPS, 14.64 MiB/s [2024-12-06T13:16:26.002Z] 3670.20 IOPS, 14.34 MiB/s [2024-12-06T13:16:26.936Z] 3631.17 IOPS, 14.18 MiB/s [2024-12-06T13:16:27.870Z] 3554.71 IOPS, 13.89 MiB/s [2024-12-06T13:16:28.805Z] 3546.62 IOPS, 13.85 MiB/s [2024-12-06T13:16:30.181Z] 3549.22 IOPS, 13.86 MiB/s [2024-12-06T13:16:30.181Z] 3590.60 IOPS, 14.03 MiB/s 00:22:23.653 Latency(us) 00:22:23.653 [2024-12-06T13:16:30.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.653 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:23.653 Verification LBA range: start 0x0 length 0x2000 00:22:23.653 TLSTESTn1 : 10.02 3597.08 14.05 0.00 0.00 35521.75 5868.45 33125.47 00:22:23.653 [2024-12-06T13:16:30.181Z] =================================================================================================================== 00:22:23.653 [2024-12-06T13:16:30.181Z] Total : 3597.08 14.05 0.00 0.00 35521.75 5868.45 33125.47 00:22:23.653 { 00:22:23.653 "results": [ 00:22:23.653 { 00:22:23.653 "job": "TLSTESTn1", 00:22:23.653 "core_mask": "0x4", 00:22:23.653 "workload": "verify", 00:22:23.653 "status": "finished", 00:22:23.653 "verify_range": { 00:22:23.653 "start": 0, 00:22:23.653 "length": 8192 00:22:23.653 }, 00:22:23.653 "queue_depth": 128, 00:22:23.653 "io_size": 4096, 00:22:23.653 "runtime": 10.016446, 00:22:23.653 "iops": 3597.0842352666805, 00:22:23.653 "mibps": 14.05111029401047, 00:22:23.653 "io_failed": 0, 00:22:23.653 "io_timeout": 0, 00:22:23.653 "avg_latency_us": 35521.7463911387, 00:22:23.653 "min_latency_us": 5868.450909090909, 00:22:23.653 "max_latency_us": 33125.46909090909 00:22:23.653 } 00:22:23.653 ], 00:22:23.653 "core_count": 1 00:22:23.653 } 00:22:23.653 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:23.653 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 84235 00:22:23.653 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84235 ']' 00:22:23.653 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84235 00:22:23.653 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:22:23.653 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.653 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84235 00:22:23.653 killing process with pid 84235 00:22:23.653 Received shutdown signal, test time was about 10.000000 seconds 00:22:23.653 00:22:23.653 Latency(us) 00:22:23.653 [2024-12-06T13:16:30.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.653 [2024-12-06T13:16:30.181Z] =================================================================================================================== 00:22:23.653 [2024-12-06T13:16:30.181Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:23.654 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:23.654 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:23.654 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84235' 00:22:23.654 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84235 00:22:23.654 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84235 00:22:23.654 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 84191 00:22:23.654 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84191 ']' 00:22:23.654 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84191 00:22:23.654 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:23.654 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.654 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84191 00:22:23.654 killing process with pid 84191 00:22:23.654 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:23.654 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:23.654 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84191' 00:22:23.654 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84191 00:22:23.654 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84191 00:22:23.654 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:22:23.654 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:23.654 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.654 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.654 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84387 00:22:23.654 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:23.654 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84387 00:22:23.654 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 84387 ']' 00:22:23.654 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.654 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.654 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.654 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.654 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.913 [2024-12-06 13:16:30.214329] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:23.913 [2024-12-06 13:16:30.214452] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.913 [2024-12-06 13:16:30.364100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.913 [2024-12-06 13:16:30.404065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.913 [2024-12-06 13:16:30.404125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.913 [2024-12-06 13:16:30.404138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.913 [2024-12-06 13:16:30.404149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.913 [2024-12-06 13:16:30.404158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:23.913 [2024-12-06 13:16:30.404528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.171 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.171 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:24.171 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:24.171 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:24.171 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.171 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.171 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.QttsGYeAgw 00:22:24.171 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QttsGYeAgw 00:22:24.171 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:24.430 [2024-12-06 13:16:30.849762] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.430 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:24.688 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:22:24.946 [2024-12-06 13:16:31.461890] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:24.946 [2024-12-06 13:16:31.462168] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:25.205 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:25.464 malloc0 00:22:25.464 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:26.031 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QttsGYeAgw 00:22:26.294 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:26.553 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84483 00:22:26.553 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:26.554 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84483 /var/tmp/bdevperf.sock 00:22:26.554 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:26.554 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84483 ']' 00:22:26.554 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
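On the initiator side, the steps that follow in the trace reduce to the block below: bdevperf starts idle in -z (wait-for-RPC) mode on its own socket, gets the PSK and a TLS-enabled controller pushed over that socket, and only runs the workload once bdevperf.py sends perform_tests. Commands are verbatim from this log:

    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/tmp.QttsGYeAgw
    # --psk key0 makes the initiator offer the same PSK the target binds
    # to host1, so the TLS handshake on 10.0.0.3:4420 can complete:
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests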
00:22:26.554 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:26.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:26.554 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.554 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:26.554 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.554 [2024-12-06 13:16:32.929009] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:26.554 [2024-12-06 13:16:32.929139] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84483 ] 00:22:26.813 [2024-12-06 13:16:33.081623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.813 [2024-12-06 13:16:33.121547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.813 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.813 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:26.813 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QttsGYeAgw 00:22:27.072 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:27.333 [2024-12-06 13:16:33.821936] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:27.593 nvme0n1 00:22:27.593 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:27.593 Running I/O for 1 seconds... 
00:22:28.788 3815.00 IOPS, 14.90 MiB/s 00:22:28.788 Latency(us) 00:22:28.788 [2024-12-06T13:16:35.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.788 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:28.788 Verification LBA range: start 0x0 length 0x2000 00:22:28.788 nvme0n1 : 1.02 3872.57 15.13 0.00 0.00 32737.35 6255.71 28835.84 00:22:28.788 [2024-12-06T13:16:35.316Z] =================================================================================================================== 00:22:28.788 [2024-12-06T13:16:35.316Z] Total : 3872.57 15.13 0.00 0.00 32737.35 6255.71 28835.84 00:22:28.788 { 00:22:28.788 "results": [ 00:22:28.788 { 00:22:28.788 "job": "nvme0n1", 00:22:28.788 "core_mask": "0x2", 00:22:28.788 "workload": "verify", 00:22:28.788 "status": "finished", 00:22:28.788 "verify_range": { 00:22:28.788 "start": 0, 00:22:28.788 "length": 8192 00:22:28.788 }, 00:22:28.788 "queue_depth": 128, 00:22:28.788 "io_size": 4096, 00:22:28.788 "runtime": 1.018187, 00:22:28.788 "iops": 3872.5695771012593, 00:22:28.788 "mibps": 15.127224910551794, 00:22:28.788 "io_failed": 0, 00:22:28.788 "io_timeout": 0, 00:22:28.788 "avg_latency_us": 32737.34964701542, 00:22:28.788 "min_latency_us": 6255.709090909091, 00:22:28.788 "max_latency_us": 28835.84 00:22:28.788 } 00:22:28.788 ], 00:22:28.788 "core_count": 1 00:22:28.788 } 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84483 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84483 ']' 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84483 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84483 00:22:28.788 killing process with pid 84483 00:22:28.788 Received shutdown signal, test time was about 1.000000 seconds 00:22:28.788 00:22:28.788 Latency(us) 00:22:28.788 [2024-12-06T13:16:35.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.788 [2024-12-06T13:16:35.316Z] =================================================================================================================== 00:22:28.788 [2024-12-06T13:16:35.316Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84483' 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84483 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84483 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 84387 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84387 ']' 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84387 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84387 00:22:28.788 killing process with pid 84387 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84387' 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84387 00:22:28.788 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84387 00:22:29.045 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:22:29.045 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:29.045 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:29.045 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.045 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84549 00:22:29.045 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84549 00:22:29.045 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84549 ']' 00:22:29.045 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.045 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.045 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.045 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:29.045 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.045 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.045 [2024-12-06 13:16:35.489799] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:29.045 [2024-12-06 13:16:35.490763] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.303 [2024-12-06 13:16:35.639174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.303 [2024-12-06 13:16:35.671540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.303 [2024-12-06 13:16:35.671601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:29.303 [2024-12-06 13:16:35.671612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.303 [2024-12-06 13:16:35.671620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.303 [2024-12-06 13:16:35.671627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.303 [2024-12-06 13:16:35.671965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.303 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.303 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:29.303 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:29.303 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:29.303 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.303 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.303 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:22:29.303 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.303 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.303 [2024-12-06 13:16:35.806805] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.303 malloc0 00:22:29.561 [2024-12-06 13:16:35.834056] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:29.561 [2024-12-06 13:16:35.834283] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:29.561 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.561 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84581 00:22:29.561 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:29.561 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84581 /var/tmp/bdevperf.sock 00:22:29.561 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84581 ']' 00:22:29.561 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:29.561 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.561 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:29.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:29.561 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.561 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.561 [2024-12-06 13:16:35.922547] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:22:29.561 [2024-12-06 13:16:35.922653] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84581 ] 00:22:29.561 [2024-12-06 13:16:36.069730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.818 [2024-12-06 13:16:36.123370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.818 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.819 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:29.819 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QttsGYeAgw 00:22:30.076 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:30.334 [2024-12-06 13:16:36.830511] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:30.592 nvme0n1 00:22:30.592 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:30.592 Running I/O for 1 seconds... 00:22:31.781 3888.00 IOPS, 15.19 MiB/s 00:22:31.781 Latency(us) 00:22:31.781 [2024-12-06T13:16:38.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.781 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:31.781 Verification LBA range: start 0x0 length 0x2000 00:22:31.781 nvme0n1 : 1.02 3951.31 15.43 0.00 0.00 32079.68 5808.87 25499.46 00:22:31.781 [2024-12-06T13:16:38.309Z] =================================================================================================================== 00:22:31.781 [2024-12-06T13:16:38.309Z] Total : 3951.31 15.43 0.00 0.00 32079.68 5808.87 25499.46 00:22:31.781 { 00:22:31.781 "results": [ 00:22:31.781 { 00:22:31.781 "job": "nvme0n1", 00:22:31.781 "core_mask": "0x2", 00:22:31.781 "workload": "verify", 00:22:31.781 "status": "finished", 00:22:31.781 "verify_range": { 00:22:31.781 "start": 0, 00:22:31.781 "length": 8192 00:22:31.781 }, 00:22:31.781 "queue_depth": 128, 00:22:31.781 "io_size": 4096, 00:22:31.782 "runtime": 1.016626, 00:22:31.782 "iops": 3951.305593207335, 00:22:31.782 "mibps": 15.434787473466152, 00:22:31.782 "io_failed": 0, 00:22:31.782 "io_timeout": 0, 00:22:31.782 "avg_latency_us": 32079.676965623374, 00:22:31.782 "min_latency_us": 5808.872727272727, 00:22:31.782 "max_latency_us": 25499.46181818182 00:22:31.782 } 00:22:31.782 ], 00:22:31.782 "core_count": 1 00:22:31.782 } 00:22:31.782 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:22:31.782 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.782 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.782 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.782 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 
00:22:31.782 "subsystems": [ 00:22:31.782 { 00:22:31.782 "subsystem": "keyring", 00:22:31.782 "config": [ 00:22:31.782 { 00:22:31.782 "method": "keyring_file_add_key", 00:22:31.782 "params": { 00:22:31.782 "name": "key0", 00:22:31.782 "path": "/tmp/tmp.QttsGYeAgw" 00:22:31.782 } 00:22:31.782 } 00:22:31.782 ] 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "subsystem": "iobuf", 00:22:31.782 "config": [ 00:22:31.782 { 00:22:31.782 "method": "iobuf_set_options", 00:22:31.782 "params": { 00:22:31.782 "enable_numa": false, 00:22:31.782 "large_bufsize": 135168, 00:22:31.782 "large_pool_count": 1024, 00:22:31.782 "small_bufsize": 8192, 00:22:31.782 "small_pool_count": 8192 00:22:31.782 } 00:22:31.782 } 00:22:31.782 ] 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "subsystem": "sock", 00:22:31.782 "config": [ 00:22:31.782 { 00:22:31.782 "method": "sock_set_default_impl", 00:22:31.782 "params": { 00:22:31.782 "impl_name": "posix" 00:22:31.782 } 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "method": "sock_impl_set_options", 00:22:31.782 "params": { 00:22:31.782 "enable_ktls": false, 00:22:31.782 "enable_placement_id": 0, 00:22:31.782 "enable_quickack": false, 00:22:31.782 "enable_recv_pipe": true, 00:22:31.782 "enable_zerocopy_send_client": false, 00:22:31.782 "enable_zerocopy_send_server": true, 00:22:31.782 "impl_name": "ssl", 00:22:31.782 "recv_buf_size": 4096, 00:22:31.782 "send_buf_size": 4096, 00:22:31.782 "tls_version": 0, 00:22:31.782 "zerocopy_threshold": 0 00:22:31.782 } 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "method": "sock_impl_set_options", 00:22:31.782 "params": { 00:22:31.782 "enable_ktls": false, 00:22:31.782 "enable_placement_id": 0, 00:22:31.782 "enable_quickack": false, 00:22:31.782 "enable_recv_pipe": true, 00:22:31.782 "enable_zerocopy_send_client": false, 00:22:31.782 "enable_zerocopy_send_server": true, 00:22:31.782 "impl_name": "posix", 00:22:31.782 "recv_buf_size": 2097152, 00:22:31.782 "send_buf_size": 2097152, 00:22:31.782 "tls_version": 0, 00:22:31.782 "zerocopy_threshold": 0 00:22:31.782 } 00:22:31.782 } 00:22:31.782 ] 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "subsystem": "vmd", 00:22:31.782 "config": [] 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "subsystem": "accel", 00:22:31.782 "config": [ 00:22:31.782 { 00:22:31.782 "method": "accel_set_options", 00:22:31.782 "params": { 00:22:31.782 "buf_count": 2048, 00:22:31.782 "large_cache_size": 16, 00:22:31.782 "sequence_count": 2048, 00:22:31.782 "small_cache_size": 128, 00:22:31.782 "task_count": 2048 00:22:31.782 } 00:22:31.782 } 00:22:31.782 ] 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "subsystem": "bdev", 00:22:31.782 "config": [ 00:22:31.782 { 00:22:31.782 "method": "bdev_set_options", 00:22:31.782 "params": { 00:22:31.782 "bdev_auto_examine": true, 00:22:31.782 "bdev_io_cache_size": 256, 00:22:31.782 "bdev_io_pool_size": 65535, 00:22:31.782 "iobuf_large_cache_size": 16, 00:22:31.782 "iobuf_small_cache_size": 128 00:22:31.782 } 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "method": "bdev_raid_set_options", 00:22:31.782 "params": { 00:22:31.782 "process_max_bandwidth_mb_sec": 0, 00:22:31.782 "process_window_size_kb": 1024 00:22:31.782 } 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "method": "bdev_iscsi_set_options", 00:22:31.782 "params": { 00:22:31.782 "timeout_sec": 30 00:22:31.782 } 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "method": "bdev_nvme_set_options", 00:22:31.782 "params": { 00:22:31.782 "action_on_timeout": "none", 00:22:31.782 "allow_accel_sequence": false, 00:22:31.782 "arbitration_burst": 0, 00:22:31.782 
"bdev_retry_count": 3, 00:22:31.782 "ctrlr_loss_timeout_sec": 0, 00:22:31.782 "delay_cmd_submit": true, 00:22:31.782 "dhchap_dhgroups": [ 00:22:31.782 "null", 00:22:31.782 "ffdhe2048", 00:22:31.782 "ffdhe3072", 00:22:31.782 "ffdhe4096", 00:22:31.782 "ffdhe6144", 00:22:31.782 "ffdhe8192" 00:22:31.782 ], 00:22:31.782 "dhchap_digests": [ 00:22:31.782 "sha256", 00:22:31.782 "sha384", 00:22:31.782 "sha512" 00:22:31.782 ], 00:22:31.782 "disable_auto_failback": false, 00:22:31.782 "fast_io_fail_timeout_sec": 0, 00:22:31.782 "generate_uuids": false, 00:22:31.782 "high_priority_weight": 0, 00:22:31.782 "io_path_stat": false, 00:22:31.782 "io_queue_requests": 0, 00:22:31.782 "keep_alive_timeout_ms": 10000, 00:22:31.782 "low_priority_weight": 0, 00:22:31.782 "medium_priority_weight": 0, 00:22:31.782 "nvme_adminq_poll_period_us": 10000, 00:22:31.782 "nvme_error_stat": false, 00:22:31.782 "nvme_ioq_poll_period_us": 0, 00:22:31.782 "rdma_cm_event_timeout_ms": 0, 00:22:31.782 "rdma_max_cq_size": 0, 00:22:31.782 "rdma_srq_size": 0, 00:22:31.782 "reconnect_delay_sec": 0, 00:22:31.782 "timeout_admin_us": 0, 00:22:31.782 "timeout_us": 0, 00:22:31.782 "transport_ack_timeout": 0, 00:22:31.782 "transport_retry_count": 4, 00:22:31.782 "transport_tos": 0 00:22:31.782 } 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "method": "bdev_nvme_set_hotplug", 00:22:31.782 "params": { 00:22:31.782 "enable": false, 00:22:31.782 "period_us": 100000 00:22:31.782 } 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "method": "bdev_malloc_create", 00:22:31.782 "params": { 00:22:31.782 "block_size": 4096, 00:22:31.782 "dif_is_head_of_md": false, 00:22:31.782 "dif_pi_format": 0, 00:22:31.782 "dif_type": 0, 00:22:31.782 "md_size": 0, 00:22:31.782 "name": "malloc0", 00:22:31.782 "num_blocks": 8192, 00:22:31.782 "optimal_io_boundary": 0, 00:22:31.782 "physical_block_size": 4096, 00:22:31.782 "uuid": "5c22f952-3cd0-4857-9e5d-23397f068dc0" 00:22:31.782 } 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "method": "bdev_wait_for_examine" 00:22:31.782 } 00:22:31.782 ] 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "subsystem": "nbd", 00:22:31.782 "config": [] 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "subsystem": "scheduler", 00:22:31.782 "config": [ 00:22:31.782 { 00:22:31.782 "method": "framework_set_scheduler", 00:22:31.782 "params": { 00:22:31.782 "name": "static" 00:22:31.782 } 00:22:31.782 } 00:22:31.782 ] 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "subsystem": "nvmf", 00:22:31.782 "config": [ 00:22:31.782 { 00:22:31.782 "method": "nvmf_set_config", 00:22:31.782 "params": { 00:22:31.782 "admin_cmd_passthru": { 00:22:31.782 "identify_ctrlr": false 00:22:31.782 }, 00:22:31.782 "dhchap_dhgroups": [ 00:22:31.782 "null", 00:22:31.782 "ffdhe2048", 00:22:31.782 "ffdhe3072", 00:22:31.782 "ffdhe4096", 00:22:31.782 "ffdhe6144", 00:22:31.782 "ffdhe8192" 00:22:31.782 ], 00:22:31.782 "dhchap_digests": [ 00:22:31.782 "sha256", 00:22:31.782 "sha384", 00:22:31.782 "sha512" 00:22:31.782 ], 00:22:31.782 "discovery_filter": "match_any" 00:22:31.782 } 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "method": "nvmf_set_max_subsystems", 00:22:31.782 "params": { 00:22:31.782 "max_subsystems": 1024 00:22:31.782 } 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "method": "nvmf_set_crdt", 00:22:31.782 "params": { 00:22:31.782 "crdt1": 0, 00:22:31.782 "crdt2": 0, 00:22:31.782 "crdt3": 0 00:22:31.782 } 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "method": "nvmf_create_transport", 00:22:31.782 "params": { 00:22:31.782 "abort_timeout_sec": 1, 00:22:31.782 "ack_timeout": 0, 
00:22:31.782 "buf_cache_size": 4294967295, 00:22:31.782 "c2h_success": false, 00:22:31.782 "data_wr_pool_size": 0, 00:22:31.782 "dif_insert_or_strip": false, 00:22:31.782 "in_capsule_data_size": 4096, 00:22:31.782 "io_unit_size": 131072, 00:22:31.782 "max_aq_depth": 128, 00:22:31.782 "max_io_qpairs_per_ctrlr": 127, 00:22:31.782 "max_io_size": 131072, 00:22:31.782 "max_queue_depth": 128, 00:22:31.782 "num_shared_buffers": 511, 00:22:31.782 "sock_priority": 0, 00:22:31.782 "trtype": "TCP", 00:22:31.782 "zcopy": false 00:22:31.782 } 00:22:31.782 }, 00:22:31.782 { 00:22:31.782 "method": "nvmf_create_subsystem", 00:22:31.783 "params": { 00:22:31.783 "allow_any_host": false, 00:22:31.783 "ana_reporting": false, 00:22:31.783 "max_cntlid": 65519, 00:22:31.783 "max_namespaces": 32, 00:22:31.783 "min_cntlid": 1, 00:22:31.783 "model_number": "SPDK bdev Controller", 00:22:31.783 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.783 "serial_number": "00000000000000000000" 00:22:31.783 } 00:22:31.783 }, 00:22:31.783 { 00:22:31.783 "method": "nvmf_subsystem_add_host", 00:22:31.783 "params": { 00:22:31.783 "host": "nqn.2016-06.io.spdk:host1", 00:22:31.783 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.783 "psk": "key0" 00:22:31.783 } 00:22:31.783 }, 00:22:31.783 { 00:22:31.783 "method": "nvmf_subsystem_add_ns", 00:22:31.783 "params": { 00:22:31.783 "namespace": { 00:22:31.783 "bdev_name": "malloc0", 00:22:31.783 "nguid": "5C22F9523CD048579E5D23397F068DC0", 00:22:31.783 "no_auto_visible": false, 00:22:31.783 "nsid": 1, 00:22:31.783 "uuid": "5c22f952-3cd0-4857-9e5d-23397f068dc0" 00:22:31.783 }, 00:22:31.783 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:31.783 } 00:22:31.783 }, 00:22:31.783 { 00:22:31.783 "method": "nvmf_subsystem_add_listener", 00:22:31.783 "params": { 00:22:31.783 "listen_address": { 00:22:31.783 "adrfam": "IPv4", 00:22:31.783 "traddr": "10.0.0.3", 00:22:31.783 "trsvcid": "4420", 00:22:31.783 "trtype": "TCP" 00:22:31.783 }, 00:22:31.783 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.783 "secure_channel": false, 00:22:31.783 "sock_impl": "ssl" 00:22:31.783 } 00:22:31.783 } 00:22:31.783 ] 00:22:31.783 } 00:22:31.783 ] 00:22:31.783 }' 00:22:31.783 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:32.348 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:22:32.348 "subsystems": [ 00:22:32.348 { 00:22:32.348 "subsystem": "keyring", 00:22:32.348 "config": [ 00:22:32.348 { 00:22:32.348 "method": "keyring_file_add_key", 00:22:32.348 "params": { 00:22:32.348 "name": "key0", 00:22:32.348 "path": "/tmp/tmp.QttsGYeAgw" 00:22:32.348 } 00:22:32.348 } 00:22:32.348 ] 00:22:32.348 }, 00:22:32.348 { 00:22:32.348 "subsystem": "iobuf", 00:22:32.348 "config": [ 00:22:32.348 { 00:22:32.348 "method": "iobuf_set_options", 00:22:32.348 "params": { 00:22:32.348 "enable_numa": false, 00:22:32.348 "large_bufsize": 135168, 00:22:32.348 "large_pool_count": 1024, 00:22:32.348 "small_bufsize": 8192, 00:22:32.348 "small_pool_count": 8192 00:22:32.348 } 00:22:32.348 } 00:22:32.348 ] 00:22:32.348 }, 00:22:32.348 { 00:22:32.348 "subsystem": "sock", 00:22:32.348 "config": [ 00:22:32.348 { 00:22:32.348 "method": "sock_set_default_impl", 00:22:32.348 "params": { 00:22:32.348 "impl_name": "posix" 00:22:32.348 } 00:22:32.348 }, 00:22:32.348 { 00:22:32.348 "method": "sock_impl_set_options", 00:22:32.348 "params": { 00:22:32.348 "enable_ktls": false, 00:22:32.348 "enable_placement_id": 0, 
00:22:32.348 "enable_quickack": false, 00:22:32.348 "enable_recv_pipe": true, 00:22:32.348 "enable_zerocopy_send_client": false, 00:22:32.348 "enable_zerocopy_send_server": true, 00:22:32.348 "impl_name": "ssl", 00:22:32.348 "recv_buf_size": 4096, 00:22:32.348 "send_buf_size": 4096, 00:22:32.348 "tls_version": 0, 00:22:32.348 "zerocopy_threshold": 0 00:22:32.348 } 00:22:32.348 }, 00:22:32.348 { 00:22:32.348 "method": "sock_impl_set_options", 00:22:32.348 "params": { 00:22:32.348 "enable_ktls": false, 00:22:32.348 "enable_placement_id": 0, 00:22:32.348 "enable_quickack": false, 00:22:32.348 "enable_recv_pipe": true, 00:22:32.348 "enable_zerocopy_send_client": false, 00:22:32.348 "enable_zerocopy_send_server": true, 00:22:32.348 "impl_name": "posix", 00:22:32.348 "recv_buf_size": 2097152, 00:22:32.348 "send_buf_size": 2097152, 00:22:32.348 "tls_version": 0, 00:22:32.348 "zerocopy_threshold": 0 00:22:32.348 } 00:22:32.348 } 00:22:32.348 ] 00:22:32.348 }, 00:22:32.348 { 00:22:32.348 "subsystem": "vmd", 00:22:32.348 "config": [] 00:22:32.348 }, 00:22:32.348 { 00:22:32.348 "subsystem": "accel", 00:22:32.348 "config": [ 00:22:32.348 { 00:22:32.348 "method": "accel_set_options", 00:22:32.348 "params": { 00:22:32.348 "buf_count": 2048, 00:22:32.348 "large_cache_size": 16, 00:22:32.348 "sequence_count": 2048, 00:22:32.348 "small_cache_size": 128, 00:22:32.348 "task_count": 2048 00:22:32.348 } 00:22:32.348 } 00:22:32.348 ] 00:22:32.348 }, 00:22:32.348 { 00:22:32.348 "subsystem": "bdev", 00:22:32.348 "config": [ 00:22:32.348 { 00:22:32.348 "method": "bdev_set_options", 00:22:32.348 "params": { 00:22:32.348 "bdev_auto_examine": true, 00:22:32.348 "bdev_io_cache_size": 256, 00:22:32.348 "bdev_io_pool_size": 65535, 00:22:32.348 "iobuf_large_cache_size": 16, 00:22:32.348 "iobuf_small_cache_size": 128 00:22:32.348 } 00:22:32.348 }, 00:22:32.348 { 00:22:32.348 "method": "bdev_raid_set_options", 00:22:32.348 "params": { 00:22:32.348 "process_max_bandwidth_mb_sec": 0, 00:22:32.348 "process_window_size_kb": 1024 00:22:32.348 } 00:22:32.348 }, 00:22:32.348 { 00:22:32.348 "method": "bdev_iscsi_set_options", 00:22:32.348 "params": { 00:22:32.348 "timeout_sec": 30 00:22:32.348 } 00:22:32.348 }, 00:22:32.348 { 00:22:32.348 "method": "bdev_nvme_set_options", 00:22:32.348 "params": { 00:22:32.348 "action_on_timeout": "none", 00:22:32.348 "allow_accel_sequence": false, 00:22:32.348 "arbitration_burst": 0, 00:22:32.348 "bdev_retry_count": 3, 00:22:32.348 "ctrlr_loss_timeout_sec": 0, 00:22:32.348 "delay_cmd_submit": true, 00:22:32.348 "dhchap_dhgroups": [ 00:22:32.348 "null", 00:22:32.348 "ffdhe2048", 00:22:32.348 "ffdhe3072", 00:22:32.348 "ffdhe4096", 00:22:32.348 "ffdhe6144", 00:22:32.348 "ffdhe8192" 00:22:32.348 ], 00:22:32.348 "dhchap_digests": [ 00:22:32.348 "sha256", 00:22:32.348 "sha384", 00:22:32.348 "sha512" 00:22:32.348 ], 00:22:32.348 "disable_auto_failback": false, 00:22:32.348 "fast_io_fail_timeout_sec": 0, 00:22:32.348 "generate_uuids": false, 00:22:32.348 "high_priority_weight": 0, 00:22:32.348 "io_path_stat": false, 00:22:32.348 "io_queue_requests": 512, 00:22:32.348 "keep_alive_timeout_ms": 10000, 00:22:32.348 "low_priority_weight": 0, 00:22:32.348 "medium_priority_weight": 0, 00:22:32.348 "nvme_adminq_poll_period_us": 10000, 00:22:32.348 "nvme_error_stat": false, 00:22:32.348 "nvme_ioq_poll_period_us": 0, 00:22:32.348 "rdma_cm_event_timeout_ms": 0, 00:22:32.348 "rdma_max_cq_size": 0, 00:22:32.348 "rdma_srq_size": 0, 00:22:32.348 "reconnect_delay_sec": 0, 00:22:32.348 "timeout_admin_us": 0, 00:22:32.348 
"timeout_us": 0, 00:22:32.348 "transport_ack_timeout": 0, 00:22:32.348 "transport_retry_count": 4, 00:22:32.348 "transport_tos": 0 00:22:32.348 } 00:22:32.348 }, 00:22:32.348 { 00:22:32.348 "method": "bdev_nvme_attach_controller", 00:22:32.348 "params": { 00:22:32.348 "adrfam": "IPv4", 00:22:32.348 "ctrlr_loss_timeout_sec": 0, 00:22:32.348 "ddgst": false, 00:22:32.348 "fast_io_fail_timeout_sec": 0, 00:22:32.348 "hdgst": false, 00:22:32.348 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:32.348 "multipath": "multipath", 00:22:32.348 "name": "nvme0", 00:22:32.348 "prchk_guard": false, 00:22:32.348 "prchk_reftag": false, 00:22:32.348 "psk": "key0", 00:22:32.348 "reconnect_delay_sec": 0, 00:22:32.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.348 "traddr": "10.0.0.3", 00:22:32.348 "trsvcid": "4420", 00:22:32.348 "trtype": "TCP" 00:22:32.348 } 00:22:32.348 }, 00:22:32.348 { 00:22:32.348 "method": "bdev_nvme_set_hotplug", 00:22:32.348 "params": { 00:22:32.348 "enable": false, 00:22:32.348 "period_us": 100000 00:22:32.348 } 00:22:32.348 }, 00:22:32.348 { 00:22:32.348 "method": "bdev_enable_histogram", 00:22:32.348 "params": { 00:22:32.348 "enable": true, 00:22:32.348 "name": "nvme0n1" 00:22:32.348 } 00:22:32.348 }, 00:22:32.348 { 00:22:32.348 "method": "bdev_wait_for_examine" 00:22:32.348 } 00:22:32.348 ] 00:22:32.348 }, 00:22:32.348 { 00:22:32.348 "subsystem": "nbd", 00:22:32.348 "config": [] 00:22:32.348 } 00:22:32.348 ] 00:22:32.348 }' 00:22:32.348 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84581 00:22:32.348 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84581 ']' 00:22:32.348 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84581 00:22:32.348 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:32.348 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.348 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84581 00:22:32.348 killing process with pid 84581 00:22:32.348 Received shutdown signal, test time was about 1.000000 seconds 00:22:32.348 00:22:32.348 Latency(us) 00:22:32.348 [2024-12-06T13:16:38.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.349 [2024-12-06T13:16:38.877Z] =================================================================================================================== 00:22:32.349 [2024-12-06T13:16:38.877Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:32.349 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:32.349 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:32.349 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84581' 00:22:32.349 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84581 00:22:32.349 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84581 00:22:32.349 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84549 00:22:32.349 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84549 ']' 00:22:32.349 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84549 
00:22:32.349 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:32.349 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.349 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84549 00:22:32.349 killing process with pid 84549 00:22:32.349 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:32.349 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:32.349 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84549' 00:22:32.349 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84549 00:22:32.349 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84549 00:22:32.607 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:22:32.607 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:32.608 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:32.608 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:22:32.608 "subsystems": [ 00:22:32.608 { 00:22:32.608 "subsystem": "keyring", 00:22:32.608 "config": [ 00:22:32.608 { 00:22:32.608 "method": "keyring_file_add_key", 00:22:32.608 "params": { 00:22:32.608 "name": "key0", 00:22:32.608 "path": "/tmp/tmp.QttsGYeAgw" 00:22:32.608 } 00:22:32.608 } 00:22:32.608 ] 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "subsystem": "iobuf", 00:22:32.608 "config": [ 00:22:32.608 { 00:22:32.608 "method": "iobuf_set_options", 00:22:32.608 "params": { 00:22:32.608 "enable_numa": false, 00:22:32.608 "large_bufsize": 135168, 00:22:32.608 "large_pool_count": 1024, 00:22:32.608 "small_bufsize": 8192, 00:22:32.608 "small_pool_count": 8192 00:22:32.608 } 00:22:32.608 } 00:22:32.608 ] 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "subsystem": "sock", 00:22:32.608 "config": [ 00:22:32.608 { 00:22:32.608 "method": "sock_set_default_impl", 00:22:32.608 "params": { 00:22:32.608 "impl_name": "posix" 00:22:32.608 } 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "method": "sock_impl_set_options", 00:22:32.608 "params": { 00:22:32.608 "enable_ktls": false, 00:22:32.608 "enable_placement_id": 0, 00:22:32.608 "enable_quickack": false, 00:22:32.608 "enable_recv_pipe": true, 00:22:32.608 "enable_zerocopy_send_client": false, 00:22:32.608 "enable_zerocopy_send_server": true, 00:22:32.608 "impl_name": "ssl", 00:22:32.608 "recv_buf_size": 4096, 00:22:32.608 "send_buf_size": 4096, 00:22:32.608 "tls_version": 0, 00:22:32.608 "zerocopy_threshold": 0 00:22:32.608 } 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "method": "sock_impl_set_options", 00:22:32.608 "params": { 00:22:32.608 "enable_ktls": false, 00:22:32.608 "enable_placement_id": 0, 00:22:32.608 "enable_quickack": false, 00:22:32.608 "enable_recv_pipe": true, 00:22:32.608 "enable_zerocopy_send_client": false, 00:22:32.608 "enable_zerocopy_send_server": true, 00:22:32.608 "impl_name": "posix", 00:22:32.608 "recv_buf_size": 2097152, 00:22:32.608 "send_buf_size": 2097152, 00:22:32.608 "tls_version": 0, 00:22:32.608 "zerocopy_threshold": 0 00:22:32.608 } 00:22:32.608 } 00:22:32.608 ] 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "subsystem": "vmd", 00:22:32.608 
"config": [] 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "subsystem": "accel", 00:22:32.608 "config": [ 00:22:32.608 { 00:22:32.608 "method": "accel_set_options", 00:22:32.608 "params": { 00:22:32.608 "buf_count": 2048, 00:22:32.608 "large_cache_size": 16, 00:22:32.608 "sequence_count": 2048, 00:22:32.608 "small_cache_size": 128, 00:22:32.608 "task_count": 2048 00:22:32.608 } 00:22:32.608 } 00:22:32.608 ] 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "subsystem": "bdev", 00:22:32.608 "config": [ 00:22:32.608 { 00:22:32.608 "method": "bdev_set_options", 00:22:32.608 "params": { 00:22:32.608 "bdev_auto_examine": true, 00:22:32.608 "bdev_io_cache_size": 256, 00:22:32.608 "bdev_io_pool_size": 65535, 00:22:32.608 "iobuf_large_cache_size": 16, 00:22:32.608 "iobuf_small_cache_size": 128 00:22:32.608 } 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "method": "bdev_raid_set_options", 00:22:32.608 "params": { 00:22:32.608 "process_max_bandwidth_mb_sec": 0, 00:22:32.608 "process_window_size_kb": 1024 00:22:32.608 } 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "method": "bdev_iscsi_set_options", 00:22:32.608 "params": { 00:22:32.608 "timeout_sec": 30 00:22:32.608 } 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "method": "bdev_nvme_set_options", 00:22:32.608 "params": { 00:22:32.608 "action_on_timeout": "none", 00:22:32.608 "allow_accel_sequence": false, 00:22:32.608 "arbitration_burst": 0, 00:22:32.608 "bdev_retry_count": 3, 00:22:32.608 "ctrlr_loss_timeout_sec": 0, 00:22:32.608 "delay_cmd_submit": true, 00:22:32.608 "dhchap_dhgroups": [ 00:22:32.608 "null", 00:22:32.608 "ffdhe2048", 00:22:32.608 "ffdhe3072", 00:22:32.608 "ffdhe4096", 00:22:32.608 "ffdhe6144", 00:22:32.608 "ffdhe8192" 00:22:32.608 ], 00:22:32.608 "dhchap_digests": [ 00:22:32.608 "sha256", 00:22:32.608 "sha384", 00:22:32.608 "sha512" 00:22:32.608 ], 00:22:32.608 "disable_auto_failback": false, 00:22:32.608 "fast_io_fail_timeout_sec": 0, 00:22:32.608 "generate_uuids": false, 00:22:32.608 "high_priority_weight": 0, 00:22:32.608 "io_path_stat": false, 00:22:32.608 "io_queue_requests": 0, 00:22:32.608 "keep_alive_timeout_ms": 10000, 00:22:32.608 "low_priority_weight": 0, 00:22:32.608 "medium_priority_weight": 0, 00:22:32.608 "nvme_adminq_poll_period_us": 10000, 00:22:32.608 "nvme_error_stat": false, 00:22:32.608 "nvme_ioq_poll_period_us": 0, 00:22:32.608 "rdma_cm_event_timeout_ms": 0, 00:22:32.608 "rdma_max_cq_size": 0, 00:22:32.608 "rdma_srq_size": 0, 00:22:32.608 "reconnect_delay_sec": 0, 00:22:32.608 "timeout_admin_us": 0, 00:22:32.608 "timeout_us": 0, 00:22:32.608 "transport_ack_timeout": 0, 00:22:32.608 "transport_retry_count": 4, 00:22:32.608 "transport_tos": 0 00:22:32.608 } 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "method": "bdev_nvme_set_hotplug", 00:22:32.608 "params": { 00:22:32.608 "enable": false, 00:22:32.608 "period_us": 100000 00:22:32.608 } 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "method": "bdev_malloc_create", 00:22:32.608 "params": { 00:22:32.608 "block_size": 4096, 00:22:32.608 "dif_is_head_of_md": false, 00:22:32.608 "dif_pi_format": 0, 00:22:32.608 "dif_type": 0, 00:22:32.608 "md_size": 0, 00:22:32.608 "name": "malloc0", 00:22:32.608 "num_blocks": 8192, 00:22:32.608 "optimal_io_boundary": 0, 00:22:32.608 "physical_block_size": 4096, 00:22:32.608 "uuid": "5c22f952-3cd0-4857-9e5d-23397f068dc0" 00:22:32.608 } 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "method": "bdev_wait_for_examine" 00:22:32.608 } 00:22:32.608 ] 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "subsystem": "nbd", 00:22:32.608 "config": [] 00:22:32.608 }, 
00:22:32.608 { 00:22:32.608 "subsystem": "scheduler", 00:22:32.608 "config": [ 00:22:32.608 { 00:22:32.608 "method": "framework_set_scheduler", 00:22:32.608 "params": { 00:22:32.608 "name": "static" 00:22:32.608 } 00:22:32.608 } 00:22:32.608 ] 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "subsystem": "nvmf", 00:22:32.608 "config": [ 00:22:32.608 { 00:22:32.608 "method": "nvmf_set_config", 00:22:32.608 "params": { 00:22:32.608 "admin_cmd_passthru": { 00:22:32.608 "identify_ctrlr": false 00:22:32.608 }, 00:22:32.608 "dhchap_dhgroups": [ 00:22:32.608 "null", 00:22:32.608 "ffdhe2048", 00:22:32.608 "ffdhe3072", 00:22:32.608 "ffdhe4096", 00:22:32.608 "ffdhe6144", 00:22:32.608 "ffdhe8192" 00:22:32.608 ], 00:22:32.608 "dhchap_digests": [ 00:22:32.608 "sha256", 00:22:32.608 "sha384", 00:22:32.608 "sha512" 00:22:32.608 ], 00:22:32.608 "discovery_filter": "match_any" 00:22:32.608 } 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "method": "nvmf_set_max_subsystems", 00:22:32.608 "params": { 00:22:32.608 "max_subsystems": 1024 00:22:32.608 } 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "method": "nvmf_set_crdt", 00:22:32.608 "params": { 00:22:32.608 "crdt1": 0, 00:22:32.608 "crdt2": 0, 00:22:32.608 "crdt3": 0 00:22:32.608 } 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "method": "nvmf_create_transport", 00:22:32.608 "params": { 00:22:32.608 "abort_timeout_sec": 1, 00:22:32.608 "ack_timeout": 0, 00:22:32.608 "buf_cache_size": 4294967295, 00:22:32.608 "c2h_success": false, 00:22:32.608 "data_wr_pool_size": 0, 00:22:32.608 "dif_insert_or_strip": false, 00:22:32.608 "in_capsule_data_size": 4096, 00:22:32.608 "io_unit_size": 131072, 00:22:32.608 "max_aq_depth": 128, 00:22:32.608 "max_io_qpairs_per_ctrlr": 127, 00:22:32.608 "max_io_size": 131072, 00:22:32.608 "max_queue_depth": 128, 00:22:32.608 "num_shared_buffers": 511, 00:22:32.608 "sock_priority": 0, 00:22:32.608 "trtype": "TCP", 00:22:32.608 "zcopy": false 00:22:32.608 } 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "method": "nvmf_create_subsystem", 00:22:32.608 "params": { 00:22:32.608 "allow_any_host": false, 00:22:32.608 "ana_reporting": false, 00:22:32.608 "max_cntlid": 65519, 00:22:32.608 "max_namespaces": 32, 00:22:32.608 "min_cntlid": 1, 00:22:32.608 "model_number": "SPDK bdev Controller", 00:22:32.608 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.608 "serial_number": "00000000000000000000" 00:22:32.608 } 00:22:32.608 }, 00:22:32.608 { 00:22:32.608 "method": "nvmf_subsystem_add_host", 00:22:32.608 "params": { 00:22:32.608 "host": "nqn.2016-06.io.spdk:host1", 00:22:32.608 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.608 "psk": "key0" 00:22:32.608 } 00:22:32.608 }, 00:22:32.608 { 00:22:32.609 "method": "nvmf_subsystem_add_ns", 00:22:32.609 "params": { 00:22:32.609 "namespace": { 00:22:32.609 "bdev_name": "malloc0", 00:22:32.609 "nguid": "5C22F9523CD048579E5D23397F068DC0", 00:22:32.609 "no_auto_visible": false, 00:22:32.609 "nsid": 1, 00:22:32.609 "uuid": "5c22f952-3cd0-4857-9e5d-23397f068dc0" 00:22:32.609 }, 00:22:32.609 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:32.609 } 00:22:32.609 }, 00:22:32.609 { 00:22:32.609 "method": "nvmf_subsystem_add_listener", 00:22:32.609 "params": { 00:22:32.609 "listen_address": { 00:22:32.609 "adrfam": "IPv4", 00:22:32.609 "traddr": "10.0.0.3", 00:22:32.609 "trsvcid": "4420", 00:22:32.609 "trtype": "TCP" 00:22:32.609 }, 00:22:32.609 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.609 "secure_channel": false, 00:22:32.609 "sock_impl": "ssl" 00:22:32.609 } 00:22:32.609 } 00:22:32.609 ] 00:22:32.609 } 00:22:32.609 ] 00:22:32.609 
}' 00:22:32.609 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.609 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84657 00:22:32.609 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:32.609 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84657 00:22:32.609 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84657 ']' 00:22:32.609 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.609 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.609 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.609 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.609 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.609 [2024-12-06 13:16:39.002042] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:32.609 [2024-12-06 13:16:39.002195] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.865 [2024-12-06 13:16:39.163296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.865 [2024-12-06 13:16:39.201329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.865 [2024-12-06 13:16:39.201399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.865 [2024-12-06 13:16:39.201413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.866 [2024-12-06 13:16:39.201423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.866 [2024-12-06 13:16:39.201432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
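The tgtcfg JSON captured above comes from save_config on the live target and is fed straight back into a fresh nvmf_tgt via -c /dev/fd/62. In other words, the whole keyring/sock/bdev/nvmf configuration (PSK key0, the malloc0 namespace, the ssl listener on 10.0.0.3:4420) is recreated in one shot rather than RPC by RPC. A sketch of the same round-trip, assuming the log's repo layout and using a plain file instead of an inherited descriptor:

    # Capture the running target's configuration as JSON over its RPC socket.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/tgt.json

    # Relaunch the target in the test's network namespace with the saved config;
    # /dev/fd/62 in the log is just this JSON piped over an inherited fd.
    ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /tmp/tgt.json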
00:22:32.866 [2024-12-06 13:16:39.201860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.122 [2024-12-06 13:16:39.402821] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.122 [2024-12-06 13:16:39.434756] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:33.122 [2024-12-06 13:16:39.435058] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:33.688 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.688 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:33.688 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:33.688 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:33.688 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.688 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:33.688 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=84701 00:22:33.688 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 84701 /var/tmp/bdevperf.sock 00:22:33.688 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84701 ']' 00:22:33.688 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.688 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.688 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:33.688 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
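The waitforlisten helper traced here simply polls the RPC socket until the freshly forked bdevperf answers. A rough stand-in for that loop, using rpc_get_methods as the liveness probe and the max_retries=100 bound visible in the trace; the real helper's probe RPC and sleep interval may differ:

    # Poll the bdevperf RPC socket until the app is up, bounded by 100 retries
    # as in the traced helper. The probe RPC and sleep interval are assumptions.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    for ((i = 0; i < 100; i++)); do
      $RPC -t 1 rpc_get_methods &> /dev/null && break
      sleep 0.1
    done
    (( i < 100 )) || { echo "bdevperf did not start" >&2; exit 1; }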
00:22:33.688 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.688 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.688 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:22:33.688 "subsystems": [ 00:22:33.688 { 00:22:33.688 "subsystem": "keyring", 00:22:33.688 "config": [ 00:22:33.688 { 00:22:33.688 "method": "keyring_file_add_key", 00:22:33.688 "params": { 00:22:33.688 "name": "key0", 00:22:33.688 "path": "/tmp/tmp.QttsGYeAgw" 00:22:33.688 } 00:22:33.688 } 00:22:33.688 ] 00:22:33.688 }, 00:22:33.688 { 00:22:33.688 "subsystem": "iobuf", 00:22:33.688 "config": [ 00:22:33.688 { 00:22:33.688 "method": "iobuf_set_options", 00:22:33.688 "params": { 00:22:33.688 "enable_numa": false, 00:22:33.688 "large_bufsize": 135168, 00:22:33.688 "large_pool_count": 1024, 00:22:33.688 "small_bufsize": 8192, 00:22:33.688 "small_pool_count": 8192 00:22:33.688 } 00:22:33.688 } 00:22:33.688 ] 00:22:33.688 }, 00:22:33.688 { 00:22:33.688 "subsystem": "sock", 00:22:33.688 "config": [ 00:22:33.688 { 00:22:33.688 "method": "sock_set_default_impl", 00:22:33.688 "params": { 00:22:33.688 "impl_name": "posix" 00:22:33.688 } 00:22:33.688 }, 00:22:33.688 { 00:22:33.688 "method": "sock_impl_set_options", 00:22:33.688 "params": { 00:22:33.688 "enable_ktls": false, 00:22:33.688 "enable_placement_id": 0, 00:22:33.688 "enable_quickack": false, 00:22:33.688 "enable_recv_pipe": true, 00:22:33.688 "enable_zerocopy_send_client": false, 00:22:33.688 "enable_zerocopy_send_server": true, 00:22:33.688 "impl_name": "ssl", 00:22:33.688 "recv_buf_size": 4096, 00:22:33.688 "send_buf_size": 4096, 00:22:33.688 "tls_version": 0, 00:22:33.688 "zerocopy_threshold": 0 00:22:33.688 } 00:22:33.688 }, 00:22:33.688 { 00:22:33.688 "method": "sock_impl_set_options", 00:22:33.688 "params": { 00:22:33.688 "enable_ktls": false, 00:22:33.688 "enable_placement_id": 0, 00:22:33.688 "enable_quickack": false, 00:22:33.688 "enable_recv_pipe": true, 00:22:33.688 "enable_zerocopy_send_client": false, 00:22:33.688 "enable_zerocopy_send_server": true, 00:22:33.688 "impl_name": "posix", 00:22:33.688 "recv_buf_size": 2097152, 00:22:33.688 "send_buf_size": 2097152, 00:22:33.688 "tls_version": 0, 00:22:33.688 "zerocopy_threshold": 0 00:22:33.688 } 00:22:33.688 } 00:22:33.688 ] 00:22:33.688 }, 00:22:33.688 { 00:22:33.688 "subsystem": "vmd", 00:22:33.688 "config": [] 00:22:33.688 }, 00:22:33.688 { 00:22:33.688 "subsystem": "accel", 00:22:33.688 "config": [ 00:22:33.688 { 00:22:33.688 "method": "accel_set_options", 00:22:33.688 "params": { 00:22:33.688 "buf_count": 2048, 00:22:33.688 "large_cache_size": 16, 00:22:33.688 "sequence_count": 2048, 00:22:33.688 "small_cache_size": 128, 00:22:33.688 "task_count": 2048 00:22:33.688 } 00:22:33.688 } 00:22:33.688 ] 00:22:33.688 }, 00:22:33.688 { 00:22:33.688 "subsystem": "bdev", 00:22:33.688 "config": [ 00:22:33.688 { 00:22:33.688 "method": "bdev_set_options", 00:22:33.688 "params": { 00:22:33.688 "bdev_auto_examine": true, 00:22:33.688 "bdev_io_cache_size": 256, 00:22:33.688 "bdev_io_pool_size": 65535, 00:22:33.688 "iobuf_large_cache_size": 16, 00:22:33.688 "iobuf_small_cache_size": 128 00:22:33.688 } 00:22:33.688 }, 00:22:33.688 { 00:22:33.688 "method": "bdev_raid_set_options", 00:22:33.688 "params": { 00:22:33.688 "process_max_bandwidth_mb_sec": 0, 00:22:33.688 "process_window_size_kb": 1024 00:22:33.688 } 00:22:33.688 }, 00:22:33.688 { 00:22:33.688 "method": "bdev_iscsi_set_options", 
00:22:33.688 "params": { 00:22:33.688 "timeout_sec": 30 00:22:33.688 } 00:22:33.688 }, 00:22:33.688 { 00:22:33.688 "method": "bdev_nvme_set_options", 00:22:33.688 "params": { 00:22:33.688 "action_on_timeout": "none", 00:22:33.688 "allow_accel_sequence": false, 00:22:33.688 "arbitration_burst": 0, 00:22:33.688 "bdev_retry_count": 3, 00:22:33.688 "ctrlr_loss_timeout_sec": 0, 00:22:33.688 "delay_cmd_submit": true, 00:22:33.688 "dhchap_dhgroups": [ 00:22:33.688 "null", 00:22:33.688 "ffdhe2048", 00:22:33.688 "ffdhe3072", 00:22:33.688 "ffdhe4096", 00:22:33.688 "ffdhe6144", 00:22:33.688 "ffdhe8192" 00:22:33.688 ], 00:22:33.688 "dhchap_digests": [ 00:22:33.688 "sha256", 00:22:33.688 "sha384", 00:22:33.688 "sha512" 00:22:33.688 ], 00:22:33.688 "disable_auto_failback": false, 00:22:33.688 "fast_io_fail_timeout_sec": 0, 00:22:33.688 "generate_uuids": false, 00:22:33.688 "high_priority_weight": 0, 00:22:33.688 "io_path_stat": false, 00:22:33.688 "io_queue_requests": 512, 00:22:33.688 "keep_alive_timeout_ms": 10000, 00:22:33.688 "low_priority_weight": 0, 00:22:33.688 "medium_priority_weight": 0, 00:22:33.688 "nvme_adminq_poll_period_us": 10000, 00:22:33.688 "nvme_error_stat": false, 00:22:33.688 "nvme_ioq_poll_period_us": 0, 00:22:33.688 "rdma_cm_event_timeout_ms": 0, 00:22:33.689 "rdma_max_cq_size": 0, 00:22:33.689 "rdma_srq_size": 0, 00:22:33.689 "reconnect_delay_sec": 0, 00:22:33.689 "timeout_admin_us": 0, 00:22:33.689 "timeout_us": 0, 00:22:33.689 "transport_ack_timeout": 0, 00:22:33.689 "transport_retry_count": 4, 00:22:33.689 "transport_tos": 0 00:22:33.689 } 00:22:33.689 }, 00:22:33.689 { 00:22:33.689 "method": "bdev_nvme_attach_controller", 00:22:33.689 "params": { 00:22:33.689 "adrfam": "IPv4", 00:22:33.689 "ctrlr_loss_timeout_sec": 0, 00:22:33.689 "ddgst": false, 00:22:33.689 "fast_io_fail_timeout_sec": 0, 00:22:33.689 "hdgst": false, 00:22:33.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.689 "multipath": "multipath", 00:22:33.689 "name": "nvme0", 00:22:33.689 "prchk_guard": false, 00:22:33.689 "prchk_reftag": false, 00:22:33.689 "psk": "key0", 00:22:33.689 "reconnect_delay_sec": 0, 00:22:33.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.689 "traddr": "10.0.0.3", 00:22:33.689 "trsvcid": "4420", 00:22:33.689 "trtype": "TCP" 00:22:33.689 } 00:22:33.689 }, 00:22:33.689 { 00:22:33.689 "method": "bdev_nvme_set_hotplug", 00:22:33.689 "params": { 00:22:33.689 "enable": false, 00:22:33.689 "period_us": 100000 00:22:33.689 } 00:22:33.689 }, 00:22:33.689 { 00:22:33.689 "method": "bdev_enable_histogram", 00:22:33.689 "params": { 00:22:33.689 "enable": true, 00:22:33.689 "name": "nvme0n1" 00:22:33.689 } 00:22:33.689 }, 00:22:33.689 { 00:22:33.689 "method": "bdev_wait_for_examine" 00:22:33.689 } 00:22:33.689 ] 00:22:33.689 }, 00:22:33.689 { 00:22:33.689 "subsystem": "nbd", 00:22:33.689 "config": [] 00:22:33.689 } 00:22:33.689 ] 00:22:33.689 }' 00:22:33.946 [2024-12-06 13:16:40.249400] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:22:33.946 [2024-12-06 13:16:40.249534] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84701 ] 00:22:33.946 [2024-12-06 13:16:40.397991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.946 [2024-12-06 13:16:40.448258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.203 [2024-12-06 13:16:40.588620] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:35.136 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.136 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:35.136 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:35.136 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:22:35.393 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.393 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:35.652 Running I/O for 1 seconds... 00:22:36.588 3736.00 IOPS, 14.59 MiB/s 00:22:36.588 Latency(us) 00:22:36.588 [2024-12-06T13:16:43.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.588 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:36.588 Verification LBA range: start 0x0 length 0x2000 00:22:36.588 nvme0n1 : 1.02 3804.09 14.86 0.00 0.00 33350.57 5421.61 31695.59 00:22:36.588 [2024-12-06T13:16:43.116Z] =================================================================================================================== 00:22:36.588 [2024-12-06T13:16:43.116Z] Total : 3804.09 14.86 0.00 0.00 33350.57 5421.61 31695.59 00:22:36.588 { 00:22:36.588 "results": [ 00:22:36.588 { 00:22:36.588 "job": "nvme0n1", 00:22:36.588 "core_mask": "0x2", 00:22:36.588 "workload": "verify", 00:22:36.588 "status": "finished", 00:22:36.588 "verify_range": { 00:22:36.588 "start": 0, 00:22:36.588 "length": 8192 00:22:36.588 }, 00:22:36.588 "queue_depth": 128, 00:22:36.588 "io_size": 4096, 00:22:36.588 "runtime": 1.01575, 00:22:36.588 "iops": 3804.0856509968003, 00:22:36.588 "mibps": 14.859709574206251, 00:22:36.588 "io_failed": 0, 00:22:36.588 "io_timeout": 0, 00:22:36.588 "avg_latency_us": 33350.56792019575, 00:22:36.588 "min_latency_us": 5421.614545454546, 00:22:36.588 "max_latency_us": 31695.592727272728 00:22:36.588 } 00:22:36.588 ], 00:22:36.588 "core_count": 1 00:22:36.588 } 00:22:36.588 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:22:36.588 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:22:36.588 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:36.588 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:22:36.588 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:22:36.588 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:36.588 
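Between the target restart and the second benchmark run above, the test double-checks that the replayed configuration actually produced the controller it expects: it lists bdevperf's attached NVMe controllers and compares the name field against nvme0 (the \n\v\m\e\0 escaping in the trace is just bash printing a literally-matched pattern). Spelled out:

    # List the controllers bdevperf attached and verify the expected name.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    name=$($RPC bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]] || { echo "unexpected controller: $name" >&2; exit 1; }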
13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:36.588 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:36.588 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:36.588 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:36.588 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:36.588 nvmf_trace.0 00:22:36.847 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:22:36.847 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84701 00:22:36.847 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84701 ']' 00:22:36.847 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84701 00:22:36.847 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:36.847 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:36.847 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84701 00:22:36.847 killing process with pid 84701 00:22:36.847 Received shutdown signal, test time was about 1.000000 seconds 00:22:36.847 00:22:36.847 Latency(us) 00:22:36.847 [2024-12-06T13:16:43.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.847 [2024-12-06T13:16:43.375Z] =================================================================================================================== 00:22:36.847 [2024-12-06T13:16:43.375Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:36.847 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:36.847 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:36.847 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84701' 00:22:36.847 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84701 00:22:36.847 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84701 00:22:36.847 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:36.847 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:36.847 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:22:37.105 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:37.105 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:22:37.105 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:37.105 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:37.105 rmmod nvme_tcp 00:22:37.105 rmmod nvme_fabrics 00:22:37.105 rmmod nvme_keyring 00:22:37.105 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:37.105 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set 
-e 00:22:37.105 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:22:37.105 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 84657 ']' 00:22:37.105 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 84657 00:22:37.105 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84657 ']' 00:22:37.105 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84657 00:22:37.105 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:37.105 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:37.105 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84657 00:22:37.105 killing process with pid 84657 00:22:37.105 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:37.105 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:37.105 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84657' 00:22:37.106 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84657 00:22:37.106 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84657 00:22:37.106 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:37.106 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:37.106 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:37.106 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:22:37.106 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:22:37.106 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:22:37.106 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:37.106 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:37.106 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:37.106 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:37.365 13:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.H6yxRCG5gN /tmp/tmp.JLkpQfrZbN /tmp/tmp.QttsGYeAgw 00:22:37.365 00:22:37.365 real 1m24.025s 00:22:37.365 user 2m19.414s 00:22:37.365 sys 0m26.677s 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.365 ************************************ 00:22:37.365 END TEST nvmf_tls 00:22:37.365 ************************************ 00:22:37.365 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.625 13:16:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:37.625 13:16:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:37.625 13:16:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.625 13:16:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:37.625 ************************************ 00:22:37.625 START TEST nvmf_fips 00:22:37.625 ************************************ 00:22:37.625 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:37.625 * Looking for test storage... 
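
[Editor's note] The run_test invocation above is the harness's standard wrapper: it prints START/END banners around a suite, times it, and propagates the exit code (the "real 1m24.025s" block just before this is that timing). A minimal sketch of the pattern, with simplified banner text; the names here are assumptions modelled on the output above, not SPDK's exact autotest_common.sh implementation:

    # wrap a test script in START/END banners and a time summary
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"; local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test nvmf_fips ./test/nvmf/fips/fips.sh --transport=tcp
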
00:22:37.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:22:37.625 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:37.625 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:22:37.625 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:37.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.625 --rc genhtml_branch_coverage=1 00:22:37.625 --rc genhtml_function_coverage=1 00:22:37.625 --rc genhtml_legend=1 00:22:37.625 --rc geninfo_all_blocks=1 00:22:37.625 --rc geninfo_unexecuted_blocks=1 00:22:37.625 00:22:37.625 ' 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:37.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.625 --rc genhtml_branch_coverage=1 00:22:37.625 --rc genhtml_function_coverage=1 00:22:37.625 --rc genhtml_legend=1 00:22:37.625 --rc geninfo_all_blocks=1 00:22:37.625 --rc geninfo_unexecuted_blocks=1 00:22:37.625 00:22:37.625 ' 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:37.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.625 --rc genhtml_branch_coverage=1 00:22:37.625 --rc genhtml_function_coverage=1 00:22:37.625 --rc genhtml_legend=1 00:22:37.625 --rc geninfo_all_blocks=1 00:22:37.625 --rc geninfo_unexecuted_blocks=1 00:22:37.625 00:22:37.625 ' 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:37.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.625 --rc genhtml_branch_coverage=1 00:22:37.625 --rc genhtml_function_coverage=1 00:22:37.625 --rc genhtml_legend=1 00:22:37.625 --rc geninfo_all_blocks=1 00:22:37.625 --rc geninfo_unexecuted_blocks=1 00:22:37.625 00:22:37.625 ' 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
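
[Editor's note] The lt/ge checks above (lt 1.15 2 for lcov here, ge 3.1.1 3.0.0 for OpenSSL further down) all funnel into cmp_versions, which splits each version on dots and dashes and compares the pieces numerically, treating missing fields as zero. A condensed sketch of that logic, reconstructed from the trace; it assumes purely numeric components, which is what the decimal helper enforces:

    # succeed (return 0) when dotted version $1 is strictly less than $2
    version_lt() {
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "lcov predates 2.x; keep the branch-coverage flags"
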
00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.625 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:37.626 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:22:37.626 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.885 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:22:37.886 Error setting digest 00:22:37.886 4062BEA9DE7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:37.886 4062BEA9DE7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:37.886 
13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:37.886 Cannot find device "nvmf_init_br" 00:22:37.886 13:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:37.886 Cannot find device "nvmf_init_br2" 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:37.886 Cannot find device "nvmf_tgt_br" 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:37.886 Cannot find device "nvmf_tgt_br2" 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:37.886 Cannot find device "nvmf_init_br" 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:37.886 Cannot find device "nvmf_init_br2" 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:37.886 Cannot find device "nvmf_tgt_br" 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:37.886 Cannot find device "nvmf_tgt_br2" 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:37.886 Cannot find device "nvmf_br" 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:22:37.886 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:38.145 Cannot find device "nvmf_init_if" 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:38.145 Cannot find device "nvmf_init_if2" 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:38.145 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:38.145 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:38.145 13:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:38.145 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:38.145 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:22:38.145 00:22:38.145 --- 10.0.0.3 ping statistics --- 00:22:38.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.145 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:22:38.145 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:38.404 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:38.404 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:22:38.404 00:22:38.404 --- 10.0.0.4 ping statistics --- 00:22:38.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.404 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:38.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:22:38.405 00:22:38.405 --- 10.0.0.1 ping statistics --- 00:22:38.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.405 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:38.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:38.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:22:38.405 00:22:38.405 --- 10.0.0.2 ping statistics --- 00:22:38.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.405 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=85042 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 85042 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 85042 ']' 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:38.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:38.405 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:38.405 [2024-12-06 13:16:44.813369] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:22:38.405 [2024-12-06 13:16:44.814156] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.664 [2024-12-06 13:16:44.975890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.664 [2024-12-06 13:16:45.031893] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.664 [2024-12-06 13:16:45.031986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.664 [2024-12-06 13:16:45.032007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.664 [2024-12-06 13:16:45.032022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.664 [2024-12-06 13:16:45.032036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.664 [2024-12-06 13:16:45.032504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.664 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.664 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:38.664 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:38.664 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:38.664 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:38.664 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.664 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:38.664 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:38.664 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:22:38.664 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.xAT 00:22:38.664 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:38.664 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.xAT 00:22:38.664 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.xAT 00:22:38.664 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.xAT 00:22:38.664 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:39.229 [2024-12-06 13:16:45.513722] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.229 [2024-12-06 13:16:45.529678] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:39.229 [2024-12-06 13:16:45.529916] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:39.229 malloc0 00:22:39.229 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:39.229 13:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=85088 00:22:39.229 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:39.229 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 85088 /var/tmp/bdevperf.sock 00:22:39.229 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 85088 ']' 00:22:39.229 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:39.229 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.229 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:39.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:39.229 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.229 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:39.229 [2024-12-06 13:16:45.690125] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:39.229 [2024-12-06 13:16:45.690236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85088 ] 00:22:39.488 [2024-12-06 13:16:45.838117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.488 [2024-12-06 13:16:45.874741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.488 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.488 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:39.488 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.xAT 00:22:39.746 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:40.006 [2024-12-06 13:16:46.526441] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:40.264 TLSTESTn1 00:22:40.264 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:40.264 Running I/O for 10 seconds... 
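
[Editor's note] The TLS data-path check above is a fixed recipe: stage the interchange-format PSK in a mode-0600 file, register it with the bdevperf app's keyring, attach an NVMe/TCP controller that references the key, then trigger the queued workload. The commands below are condensed from the trace itself (socket path and the throwaway test PSK exactly as logged; run from the SPDK repo root, with bdevperf already launched with -z so it waits for the RPC):

    rpc=./scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # stage the PSK; TLS keys must not be group/world readable
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"

    # register the key and attach a TLS-enabled NVMe/TCP controller
    "$rpc" -s "$sock" keyring_file_add_key key0 "$key_path"
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # start the verify workload bdevperf was configured with (-q 128 -o 4096 -w verify -t 10)
    ./examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
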
00:22:42.573 3832.00 IOPS, 14.97 MiB/s [2024-12-06T13:16:50.035Z] 3832.00 IOPS, 14.97 MiB/s [2024-12-06T13:16:50.969Z] 3871.00 IOPS, 15.12 MiB/s [2024-12-06T13:16:51.903Z] 3903.25 IOPS, 15.25 MiB/s [2024-12-06T13:16:52.844Z] 3912.00 IOPS, 15.28 MiB/s [2024-12-06T13:16:53.780Z] 3907.67 IOPS, 15.26 MiB/s [2024-12-06T13:16:55.157Z] 3855.86 IOPS, 15.06 MiB/s [2024-12-06T13:16:56.091Z] 3832.25 IOPS, 14.97 MiB/s [2024-12-06T13:16:57.021Z] 3842.56 IOPS, 15.01 MiB/s [2024-12-06T13:16:57.021Z] 3824.30 IOPS, 14.94 MiB/s 00:22:50.493 Latency(us) 00:22:50.493 [2024-12-06T13:16:57.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.493 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:50.493 Verification LBA range: start 0x0 length 0x2000 00:22:50.493 TLSTESTn1 : 10.02 3829.87 14.96 0.00 0.00 33355.59 6404.65 31457.28 00:22:50.493 [2024-12-06T13:16:57.021Z] =================================================================================================================== 00:22:50.493 [2024-12-06T13:16:57.021Z] Total : 3829.87 14.96 0.00 0.00 33355.59 6404.65 31457.28 00:22:50.493 { 00:22:50.493 "results": [ 00:22:50.493 { 00:22:50.493 "job": "TLSTESTn1", 00:22:50.493 "core_mask": "0x4", 00:22:50.493 "workload": "verify", 00:22:50.493 "status": "finished", 00:22:50.493 "verify_range": { 00:22:50.493 "start": 0, 00:22:50.493 "length": 8192 00:22:50.493 }, 00:22:50.493 "queue_depth": 128, 00:22:50.493 "io_size": 4096, 00:22:50.493 "runtime": 10.018359, 00:22:50.493 "iops": 3829.86874397294, 00:22:50.493 "mibps": 14.960424781144297, 00:22:50.493 "io_failed": 0, 00:22:50.493 "io_timeout": 0, 00:22:50.493 "avg_latency_us": 33355.59290430959, 00:22:50.493 "min_latency_us": 6404.654545454546, 00:22:50.493 "max_latency_us": 31457.28 00:22:50.493 } 00:22:50.493 ], 00:22:50.493 "core_count": 1 00:22:50.493 } 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:50.493 nvmf_trace.0 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85088 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 85088 ']' 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 85088 
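
[Editor's note] Teardown then follows the same killprocess path the TLS suite used (pids 84701 and 84657 there, 85088 and 85042 here): confirm the pid is still alive, check the process comm so a sudo wrapper is never signalled directly, then kill and reap. A compact sketch of that flow as it appears in the trace; the real helper treats a sudo parent specially instead of bailing out, which is simplified away here:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1     # anything left to kill?
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [[ $name == sudo ]] && return 1        # simplified: never signal sudo itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                    # reap so the port and shm are released
    }
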
00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85088 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:50.493 killing process with pid 85088 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85088' 00:22:50.493 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 85088 00:22:50.493 Received shutdown signal, test time was about 10.000000 seconds 00:22:50.493 00:22:50.493 Latency(us) 00:22:50.493 [2024-12-06T13:16:57.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.493 [2024-12-06T13:16:57.022Z] =================================================================================================================== 00:22:50.494 [2024-12-06T13:16:57.022Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:50.494 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 85088 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:50.763 rmmod nvme_tcp 00:22:50.763 rmmod nvme_fabrics 00:22:50.763 rmmod nvme_keyring 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 85042 ']' 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 85042 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 85042 ']' 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 85042 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85042 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 
= sudo ']' 00:22:50.763 killing process with pid 85042 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85042' 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 85042 00:22:50.763 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 85042 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.021 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.279 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:22:51.279 13:16:57 
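The killprocess helper traced twice above (for pids 85088 and 85042) boils down to the sketch below. This is a simplification inferred from the xtrace, not the verbatim function from common/autotest_common.sh, which takes additional options.

# Sketch of killprocess as reconstructed from the trace above (assumed
# simplification; the real helper handles more signals and options).
killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1                # the '[' -z ... ']' guard
    kill -0 "$pid" || return 1               # process must still be alive
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1
    fi
    if [[ $process_name != sudo ]]; then     # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap it and collect the status
    fi
}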
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.xAT 00:22:51.279 ************************************ 00:22:51.279 END TEST nvmf_fips 00:22:51.279 ************************************ 00:22:51.279 00:22:51.279 real 0m13.672s 00:22:51.279 user 0m18.766s 00:22:51.279 sys 0m5.648s 00:22:51.279 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:51.279 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:51.279 13:16:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:51.279 13:16:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:51.279 13:16:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:51.279 13:16:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:51.279 ************************************ 00:22:51.279 START TEST nvmf_control_msg_list 00:22:51.279 ************************************ 00:22:51.279 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:51.279 * Looking for test storage... 00:22:51.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:51.279 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:51.279 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:22:51.279 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:51.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.537 --rc genhtml_branch_coverage=1 00:22:51.537 --rc genhtml_function_coverage=1 00:22:51.537 --rc genhtml_legend=1 00:22:51.537 --rc geninfo_all_blocks=1 00:22:51.537 --rc geninfo_unexecuted_blocks=1 00:22:51.537 00:22:51.537 ' 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:51.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.537 --rc genhtml_branch_coverage=1 00:22:51.537 --rc genhtml_function_coverage=1 00:22:51.537 --rc genhtml_legend=1 00:22:51.537 --rc geninfo_all_blocks=1 00:22:51.537 --rc geninfo_unexecuted_blocks=1 00:22:51.537 00:22:51.537 ' 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:51.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.537 --rc genhtml_branch_coverage=1 00:22:51.537 --rc genhtml_function_coverage=1 00:22:51.537 --rc genhtml_legend=1 00:22:51.537 --rc geninfo_all_blocks=1 00:22:51.537 --rc geninfo_unexecuted_blocks=1 00:22:51.537 00:22:51.537 ' 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:51.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.537 --rc genhtml_branch_coverage=1 00:22:51.537 --rc genhtml_function_coverage=1 00:22:51.537 --rc genhtml_legend=1 00:22:51.537 --rc geninfo_all_blocks=1 00:22:51.537 --rc 
geninfo_unexecuted_blocks=1 00:22:51.537 00:22:51.537 ' 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:51.537 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:51.538 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:51.538 Cannot find device "nvmf_init_br" 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:51.538 Cannot find device "nvmf_init_br2" 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:51.538 Cannot find device "nvmf_tgt_br" 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:51.538 Cannot find device "nvmf_tgt_br2" 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:51.538 Cannot find device "nvmf_init_br" 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:51.538 Cannot find device "nvmf_init_br2" 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:51.538 Cannot find device "nvmf_tgt_br" 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:51.538 Cannot find device "nvmf_tgt_br2" 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:51.538 Cannot find device "nvmf_br" 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:22:51.538 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:51.538 Cannot find 
device "nvmf_init_if" 00:22:51.538 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:22:51.538 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:51.538 Cannot find device "nvmf_init_if2" 00:22:51.538 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:22:51.538 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:51.538 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:51.538 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:22:51.538 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:51.538 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:51.538 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:22:51.538 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:51.538 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:51.538 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:51.538 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:51.538 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:51.538 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:51.797 13:16:58 
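Condensed from the nvmf_veth_init trace above, the first initiator/target pair is wired as follows; addresses and interface names are taken from the trace, and the nvmf_init_if2/nvmf_tgt_if2 pair repeats the same steps.

# First veth pair of the topology nvmf_veth_init builds above (condensed
# from the trace; error handling and the second pair are omitted).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# next entries: nvmf_br is created and the *_br peers are enslaved to it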
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:51.797 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:51.797 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:22:51.797 00:22:51.797 --- 10.0.0.3 ping statistics --- 00:22:51.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.797 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:51.797 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:51.797 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:22:51.797 00:22:51.797 --- 10.0.0.4 ping statistics --- 00:22:51.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.797 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:51.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:51.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:22:51.797 00:22:51.797 --- 10.0.0.1 ping statistics --- 00:22:51.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.797 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:51.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:22:51.797 00:22:51.797 --- 10.0.0.2 ping statistics --- 00:22:51.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.797 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=85495 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 85495 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 85495 ']' 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
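The target launch being traced here follows the pattern below; the binary path and flags appear verbatim in the log, while the backgrounding via '&' and the $! capture inside nvmfappstart are assumptions.

# Sketch of the nvmfappstart sequence traced above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPCs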
00:22:51.797 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.798 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:52.055 [2024-12-06 13:16:58.341988] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:52.055 [2024-12-06 13:16:58.342080] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.055 [2024-12-06 13:16:58.529303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.056 [2024-12-06 13:16:58.561344] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.056 [2024-12-06 13:16:58.561401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.056 [2024-12-06 13:16:58.561413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.056 [2024-12-06 13:16:58.561421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.056 [2024-12-06 13:16:58.561429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.056 [2024-12-06 13:16:58.561753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:52.990 [2024-12-06 13:16:59.420692] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:52.990 Malloc0 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:52.990 [2024-12-06 13:16:59.455630] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85541 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85542 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85544 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:22:52.990 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85541 00:22:53.249 [2024-12-06 13:16:59.644245] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release. 00:22:53.249 [2024-12-06 13:16:59.644517] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:53.249 [2024-12-06 13:16:59.654182] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:54.200 Initializing NVMe Controllers 00:22:54.200 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:22:54.200 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:22:54.200 Initialization complete. Launching workers. 00:22:54.200 ======================================================== 00:22:54.200 Latency(us) 00:22:54.200 Device Information : IOPS MiB/s Average min max 00:22:54.200 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3245.92 12.68 307.77 131.12 889.06 00:22:54.200 ======================================================== 00:22:54.200 Total : 3245.92 12.68 307.77 131.12 889.06 00:22:54.200 00:22:54.200 Initializing NVMe Controllers 00:22:54.200 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:22:54.200 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:22:54.201 Initialization complete. Launching workers. 00:22:54.201 ======================================================== 00:22:54.201 Latency(us) 00:22:54.201 Device Information : IOPS MiB/s Average min max 00:22:54.201 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3226.00 12.60 309.65 167.83 900.69 00:22:54.201 ======================================================== 00:22:54.201 Total : 3226.00 12.60 309.65 167.83 900.69 00:22:54.201 00:22:54.201 Initializing NVMe Controllers 00:22:54.201 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:22:54.201 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:22:54.201 Initialization complete. Launching workers. 
00:22:54.201 ======================================================== 00:22:54.201 Latency(us) 00:22:54.201 Device Information : IOPS MiB/s Average min max 00:22:54.201 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3247.00 12.68 307.47 137.78 870.99 00:22:54.201 ======================================================== 00:22:54.201 Total : 3247.00 12.68 307.47 137.78 870.99 00:22:54.201 00:22:54.201 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85542 00:22:54.201 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85544 00:22:54.201 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:54.201 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:22:54.201 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:54.201 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.464 rmmod nvme_tcp 00:22:54.464 rmmod nvme_fabrics 00:22:54.464 rmmod nvme_keyring 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 85495 ']' 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 85495 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 85495 ']' 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 85495 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85495 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:54.464 killing process with pid 85495 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85495' 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 85495 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 85495 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:54.464 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:54.722 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:54.722 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:54.722 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:54.722 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:54.722 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:54.722 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:54.722 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:54.722 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:54.722 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:54.722 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:54.722 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:54.722 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:54.722 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:54.722 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.722 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.722 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.722 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:22:54.722 00:22:54.722 real 0m3.586s 00:22:54.722 user 0m5.697s 00:22:54.722 
sys 0m1.321s 00:22:54.722 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:54.722 ************************************ 00:22:54.722 END TEST nvmf_control_msg_list 00:22:54.722 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:54.722 ************************************ 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:54.981 ************************************ 00:22:54.981 START TEST nvmf_wait_for_buf 00:22:54.981 ************************************ 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:54.981 * Looking for test storage... 00:22:54.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:54.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.981 --rc genhtml_branch_coverage=1 00:22:54.981 --rc genhtml_function_coverage=1 00:22:54.981 --rc genhtml_legend=1 00:22:54.981 --rc geninfo_all_blocks=1 00:22:54.981 --rc geninfo_unexecuted_blocks=1 00:22:54.981 00:22:54.981 ' 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:54.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.981 --rc genhtml_branch_coverage=1 00:22:54.981 --rc genhtml_function_coverage=1 00:22:54.981 --rc genhtml_legend=1 00:22:54.981 --rc geninfo_all_blocks=1 00:22:54.981 --rc geninfo_unexecuted_blocks=1 00:22:54.981 00:22:54.981 ' 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:54.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.981 --rc genhtml_branch_coverage=1 00:22:54.981 --rc genhtml_function_coverage=1 00:22:54.981 --rc genhtml_legend=1 00:22:54.981 --rc geninfo_all_blocks=1 00:22:54.981 --rc geninfo_unexecuted_blocks=1 00:22:54.981 00:22:54.981 ' 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:54.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.981 --rc genhtml_branch_coverage=1 00:22:54.981 --rc genhtml_function_coverage=1 00:22:54.981 --rc genhtml_legend=1 00:22:54.981 --rc geninfo_all_blocks=1 00:22:54.981 --rc geninfo_unexecuted_blocks=1 00:22:54.981 00:22:54.981 ' 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:54.981 13:17:01 
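The version probe traced again just above (lt 1.15 2 deciding which lcov options to export) can be read as the sketch below; the field-by-field loop mirrors the xtrace, while the operator mapping at the end is an assumption about scripts/common.sh.

# Sketch of lt/cmp_versions as traced above (operator handling assumed).
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local IFS=.- op=$2 v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"                  # "1.15" -> (1 15)
    read -ra ver2 <<< "$3"                  # "2"    -> (2)
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}
# lt 1.15 2 succeeds here, so the lcov 1.x LCOV_OPTS above get exported.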
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:54.981 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
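The "[: : integer expression expected" complaint from common.sh line 33 above is test(1) being handed an empty string where -eq needs an integer; the run shrugs it off because the failed test simply skips that branch. A sketch of the pitfall and the usual guard (VAR is a stand-in name, not the variable common.sh actually checks):

    VAR=''
    [ "$VAR" -eq 1 ] && echo enabled          # reproduces the same error; branch skipped
    [ "${VAR:-0}" -eq 1 ] || echo disabled    # defaulting to 0 keeps test(1) happy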
00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.981 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
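Note the ordering in nvmftestinit just traced: the nvmftestfini trap is registered (nvmf/common.sh@474) before any namespace or veth exists, so an interrupt or failure at any later step still tears the topology down. The pattern, sketched with a placeholder body:

    nvmftestfini() {
        # placeholder body: the real function also unloads modules and kills the target
        ip netns del nvmf_tgt_ns_spdk 2> /dev/null || true
    }
    trap nvmftestfini SIGINT SIGTERM EXIT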
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:54.982 Cannot find device "nvmf_init_br" 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:54.982 Cannot find device "nvmf_init_br2" 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:54.982 Cannot find device "nvmf_tgt_br" 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:54.982 Cannot find device "nvmf_tgt_br2" 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:54.982 Cannot find device "nvmf_init_br" 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:22:54.982 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:55.240 Cannot find device "nvmf_init_br2" 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:55.240 Cannot find device "nvmf_tgt_br" 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:55.240 Cannot find device "nvmf_tgt_br2" 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:55.240 Cannot find device "nvmf_br" 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:55.240 Cannot find device "nvmf_init_if" 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:55.240 Cannot find device "nvmf_init_if2" 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:55.240 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:55.240 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:55.240 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:55.241 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:55.241 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:55.241 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
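Every "Cannot find device" line above is an intentional probe: the pre-setup cleanup pass issues each ip command and immediately evaluates true at the same script line, so leftovers from a previous run get removed while a clean host produces only harmless noise. The idiom, condensed (not the literal common.sh body):

    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2> /dev/null || true   # detach if enslaved
        ip link set "$dev" down 2> /dev/null || true       # lower it if present
    done
    ip link delete nvmf_br type bridge 2> /dev/null || true    # drop a stale bridge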
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:55.499 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:55.499 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:22:55.499 00:22:55.499 --- 10.0.0.3 ping statistics --- 00:22:55.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.499 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:55.499 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:55.499 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:22:55.499 00:22:55.499 --- 10.0.0.4 ping statistics --- 00:22:55.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.499 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:55.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:55.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:22:55.499 00:22:55.499 --- 10.0.0.1 ping statistics --- 00:22:55.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.499 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:55.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
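What nvmf_veth_init built above is a self-contained test network: veth pairs whose target ends live inside the nvmf_tgt_ns_spdk namespace, bridge-side ends enslaved to nvmf_br, initiator addresses 10.0.0.1-2 on the host and target addresses 10.0.0.3-4 in the namespace, with the pings as the smoke test. Condensed to a single pair (the script repeats this for the *_if2/*_br2 pair and brings up lo in the namespace):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # isolate the target end
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up; ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3    # host -> namespace, the same check as above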
00:22:55.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:22:55.499 00:22:55.499 --- 10.0.0.2 ping statistics --- 00:22:55.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.499 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=85781 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 85781 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 85781 ']' 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.499 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:55.499 [2024-12-06 13:17:01.891172] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
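nvmfappstart has now launched nvmf_tgt inside the namespace with --wait-for-rpc and parked in waitforlisten until the RPC socket answers. A simplified sketch of that wait (the real helper in autotest_common.sh also honors a custom socket path and probes the RPC itself, not just the socket file):

    nvmfpid=85781    # pid captured from the launch, as traced above
    for (( i = 0; i < 100; i++ )); do    # max_retries=100, as traced
        kill -0 "$nvmfpid" 2> /dev/null || { echo 'target exited early' >&2; exit 1; }
        [[ -S /var/tmp/spdk.sock ]] && break    # socket is up; RPCs can proceed
        sleep 0.1
    done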
00:22:55.499 [2024-12-06 13:17:01.891261] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.758 [2024-12-06 13:17:02.042483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.758 [2024-12-06 13:17:02.080768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.758 [2024-12-06 13:17:02.080845] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.758 [2024-12-06 13:17:02.080859] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.758 [2024-12-06 13:17:02.080869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.758 [2024-12-06 13:17:02.080877] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.758 [2024-12-06 13:17:02.081241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.758 13:17:02 
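Starting with --wait-for-rpc is what lets wait_for_buf.sh shrink the iobuf small pool to 154 buffers before framework initialization: with queue depth 4 and 128 KiB reads the pool is guaranteed to run dry, forcing the transport onto its wait-for-buffer path (the retry counter checked at the end of the test is the proof). rpc_cmd is a thin wrapper over scripts/rpc.py, so the equivalent direct calls would be roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_set_options --small-cache-size 0 --large-cache-size 0    # no accel buffer caching
    $rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192  # deliberately tiny pool
    $rpc framework_start_init                                           # now finish startup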
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:55.758 Malloc0 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:55.758 [2024-12-06 13:17:02.246902] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:55.758 [2024-12-06 13:17:02.271020] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.758 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:22:56.017 [2024-12-06 13:17:02.482105] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
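That rpc_cmd sequence stands up the entire data path: a 32 MiB malloc bdev with 512-byte blocks, a TCP transport with deliberately small buffer counts (-n 24 -b 24), a subsystem, its namespace, and a listener on 10.0.0.3:4420. The same bring-up as direct rpc.py calls would be roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create -b Malloc0 32 512                    # 32 MiB, 512 B blocks
    $rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24     # starve the buffer pools
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420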
This behavior is deprecated and will be removed in a future release.
00:22:57.392 Initializing NVMe Controllers
00:22:57.392 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:22:57.392 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:22:57.392 Initialization complete. Launching workers.
00:22:57.392 ========================================================
00:22:57.392                                                                            Latency(us)
00:22:57.392 Device Information                                                       :   IOPS     MiB/s   Average      min      max
00:22:57.392 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0:   128.00   16.00   32655.68   8013.45   66958.54
00:22:57.392 ========================================================
00:22:57.392 Total                                                                    :   128.00   16.00   32655.68   8013.45   66958.54
00:22:57.392
00:22:57.392 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:57.651 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:57.651 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:57.651 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:57.651 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:57.651 rmmod nvme_tcp 00:22:57.651 rmmod nvme_fabrics 00:22:57.651 rmmod nvme_keyring 00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 85781 ']' 00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 85781 00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 85781 ']' 00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 85781 00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname
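The jq extraction above is the whole point of the test: it passes only if nvmf_TCP's small iobuf pool actually ran dry, i.e. small_pool.retry came back nonzero (2022 retries here). As a standalone check:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    retry_count=$("$rpc" iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    if [[ $retry_count -eq 0 ]]; then
        echo 'small pool never exhausted; wait-for-buf path untested' >&2
        exit 1
    fi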
00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85781 00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:57.651 killing process with pid 85781 00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85781' 00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 85781 00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 85781 00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:57.651 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:57.909 13:17:04 
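The iptr step just traced closes the loop on the tagged firewall rules from setup: ipts stamped every rule it inserted with an SPDK_NVMF comment, so teardown can delete exactly those rules by round-tripping the ruleset through grep. The whole mechanism fits in two lines:

    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }    # setup: tag each rule
    iptables-save | grep -v SPDK_NVMF | iptables-restore             # teardown: drop the tags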
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.909 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:22:57.909 00:22:57.909 real 0m3.164s 00:22:57.909 user 0m2.600s 00:22:57.909 sys 0m0.665s 00:22:57.910 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:57.910 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:57.910 ************************************ 00:22:57.910 END TEST nvmf_wait_for_buf 00:22:57.910 ************************************ 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:58.169 ************************************ 00:22:58.169 START TEST nvmf_nsid 00:22:58.169 ************************************ 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:58.169 * Looking for test storage... 
00:22:58.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:58.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.169 --rc genhtml_branch_coverage=1 00:22:58.169 --rc genhtml_function_coverage=1 00:22:58.169 --rc genhtml_legend=1 00:22:58.169 --rc geninfo_all_blocks=1 00:22:58.169 --rc geninfo_unexecuted_blocks=1 00:22:58.169 00:22:58.169 ' 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:58.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.169 --rc genhtml_branch_coverage=1 00:22:58.169 --rc genhtml_function_coverage=1 00:22:58.169 --rc genhtml_legend=1 00:22:58.169 --rc geninfo_all_blocks=1 00:22:58.169 --rc geninfo_unexecuted_blocks=1 00:22:58.169 00:22:58.169 ' 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:58.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.169 --rc genhtml_branch_coverage=1 00:22:58.169 --rc genhtml_function_coverage=1 00:22:58.169 --rc genhtml_legend=1 00:22:58.169 --rc geninfo_all_blocks=1 00:22:58.169 --rc geninfo_unexecuted_blocks=1 00:22:58.169 00:22:58.169 ' 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:58.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.169 --rc genhtml_branch_coverage=1 00:22:58.169 --rc genhtml_function_coverage=1 00:22:58.169 --rc genhtml_legend=1 00:22:58.169 --rc geninfo_all_blocks=1 00:22:58.169 --rc geninfo_unexecuted_blocks=1 00:22:58.169 00:22:58.169 ' 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.169 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:58.170 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:58.170 Cannot find device "nvmf_init_br" 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:22:58.170 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:58.170 Cannot find device "nvmf_init_br2" 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:58.428 Cannot find device "nvmf_tgt_br" 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:58.428 Cannot find device "nvmf_tgt_br2" 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:58.428 Cannot find device "nvmf_init_br" 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:58.428 Cannot find device "nvmf_init_br2" 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:58.428 Cannot find device "nvmf_tgt_br" 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:58.428 Cannot find device "nvmf_tgt_br2" 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:58.428 Cannot find device "nvmf_br" 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:58.428 Cannot find device "nvmf_init_if" 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:58.428 Cannot find device "nvmf_init_if2" 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:58.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:22:58.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:58.428 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:58.687 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:58.687 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:58.687 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
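At this point nvmf_veth_init has finished assembling the test network. A condensed sketch of what the trace above just did, assuming the interface names and 10.0.0.x/24 addresses shown in the records (the harness's own guards and teardown-before-setup steps are omitted here):

```bash
#!/usr/bin/env bash
# Sketch of the veth/bridge topology built by nvmf_veth_init above.
# Names and addresses are taken from the trace; this condenses the
# harness's effect and is not its source.

ip netns add nvmf_tgt_ns_spdk

# Two veth pairs per side: initiator-facing and target-facing.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side ends move into the namespace; initiator ends stay in the root ns.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# A bridge in the root namespace stitches the four peer ends together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" master nvmf_br
done
```

The `ipts` records that follow tag every inserted iptables rule with an `SPDK_NVMF:` comment; that tag is what lets the `iptr` cleanup seen further down restore the original ruleset with a plain `iptables-save | grep -v SPDK_NVMF | iptables-restore`.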
00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:58.687 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:58.687 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:22:58.687 00:22:58.687 --- 10.0.0.3 ping statistics --- 00:22:58.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.687 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:58.687 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:58.687 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:22:58.687 00:22:58.687 --- 10.0.0.4 ping statistics --- 00:22:58.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.687 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:58.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:58.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:22:58.687 00:22:58.687 --- 10.0.0.1 ping statistics --- 00:22:58.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.687 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:58.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:58.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:22:58.687 00:22:58.687 --- 10.0.0.2 ping statistics --- 00:22:58.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.687 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=86053 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 86053 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 86053 ']' 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.687 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:58.687 [2024-12-06 13:17:05.155194] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:22:58.687 [2024-12-06 13:17:05.155912] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.944 [2024-12-06 13:17:05.306490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.944 [2024-12-06 13:17:05.345137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.944 [2024-12-06 13:17:05.345225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.944 [2024-12-06 13:17:05.345247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.944 [2024-12-06 13:17:05.345262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.944 [2024-12-06 13:17:05.345276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:58.944 [2024-12-06 13:17:05.345766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.944 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.944 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:58.944 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:58.944 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:58.944 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=86082 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=c07b4332-e88f-45a6-b723-e1d70fe4d0af 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=274f1957-cfb9-408f-9da2-233f0fb32eec 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=ac90f8d6-5a5f-4bc8-906c-06ffb32151e0 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:59.201 null0 00:22:59.201 null1 00:22:59.201 null2 00:22:59.201 [2024-12-06 13:17:05.542377] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.201 [2024-12-06 13:17:05.560050] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:59.201 [2024-12-06 13:17:05.560143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86082 ] 00:22:59.201 [2024-12-06 13:17:05.566540] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 86082 /var/tmp/tgt2.sock 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 86082 ']' 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
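The nsid test runs two SPDK targets side by side, which the records above set up. A sketch of that arrangement, using the paths, masks, and ports visible in the trace (the background/launch plumbing is simplified, and `rpc_get_methods` stands in as an example RPC):

```bash
# Sketch of the dual-target layout driven by target/nsid.sh above.
SPDK=/home/vagrant/spdk_repo/spdk

# tgt1: the main nvmf_tgt, launched inside the namespace (core mask 1);
# it ends up listening on the namespace-side address 10.0.0.3:4420.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 1 &

# tgt2: a second spdk_tgt in the root namespace on a different core (-m 2),
# given a private RPC socket so the default /var/tmp/spdk.sock still
# addresses tgt1; it will listen on 10.0.0.1:4421.
"$SPDK/build/bin/spdk_tgt" -m 2 -r /var/tmp/tgt2.sock &

# RPCs intended for tgt2 are steered at its socket explicitly, e.g.:
"$SPDK/scripts/rpc.py" -s /var/tmp/tgt2.sock rpc_get_methods
```

This is why the later `nvme connect` in the trace targets `-a 10.0.0.1 -s 4421`: the initiator in the root namespace reaches tgt2 directly, while tgt1 sits behind the namespace on 10.0.0.3.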
00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.201 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:59.201 [2024-12-06 13:17:05.708435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.459 [2024-12-06 13:17:05.747021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.459 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.459 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:59.459 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:00.025 [2024-12-06 13:17:06.389005] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.025 [2024-12-06 13:17:06.405213] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:00.025 nvme0n1 nvme0n2 00:23:00.025 nvme1n1 00:23:00.025 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:00.025 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:00.025 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:00.283 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:00.283 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:00.283 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:00.283 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:00.283 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:00.283 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:00.283 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:00.283 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:00.283 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:00.283 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:00.283 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:00.283 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:00.283 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:01.220 13:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid c07b4332-e88f-45a6-b723-e1d70fe4d0af 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c07b4332e88f45a6b723e1d70fe4d0af 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C07B4332E88F45A6B723E1D70FE4D0AF 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ C07B4332E88F45A6B723E1D70FE4D0AF == \C\0\7\B\4\3\3\2\E\8\8\F\4\5\A\6\B\7\2\3\E\1\D\7\0\F\E\4\D\0\A\F ]] 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 274f1957-cfb9-408f-9da2-233f0fb32eec 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:01.220 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=274f1957cfb9408f9da2233f0fb32eec 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 274F1957CFB9408F9DA2233F0FB32EEC 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 274F1957CFB9408F9DA2233F0FB32EEC == \2\7\4\F\1\9\5\7\C\F\B\9\4\0\8\F\9\D\A\2\2\3\3\F\0\F\B\3\2\E\E\C ]] 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:01.479 13:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid ac90f8d6-5a5f-4bc8-906c-06ffb32151e0 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ac90f8d65a5f4bc8906c06ffb32151e0 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AC90F8D65A5F4BC8906C06FFB32151E0 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ AC90F8D65A5F4BC8906C06FFB32151E0 == \A\C\9\0\F\8\D\6\5\A\5\F\4\B\C\8\9\0\6\C\0\6\F\F\B\3\2\1\5\1\E\0 ]] 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 86082 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 86082 ']' 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 86082 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.479 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86082 00:23:01.738 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:01.738 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:01.738 killing process with pid 86082 00:23:01.738 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86082' 00:23:01.738 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 86082 00:23:01.738 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 86082 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 
00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:01.998 rmmod nvme_tcp 00:23:01.998 rmmod nvme_fabrics 00:23:01.998 rmmod nvme_keyring 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 86053 ']' 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 86053 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 86053 ']' 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 86053 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86053 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:01.998 killing process with pid 86053 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86053' 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 86053 00:23:01.998 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 86053 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link 
set nvmf_init_br down 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:02.258 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:02.518 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:02.518 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.518 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.518 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.518 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:23:02.518 ************************************ 00:23:02.518 END TEST nvmf_nsid 00:23:02.518 ************************************ 00:23:02.518 00:23:02.518 real 0m4.360s 00:23:02.518 user 0m6.970s 00:23:02.518 sys 0m1.186s 00:23:02.518 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:02.518 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:02.518 13:17:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:02.518 00:23:02.518 real 7m22.912s 00:23:02.518 user 17m56.709s 00:23:02.518 sys 1m23.560s 00:23:02.518 13:17:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:02.518 13:17:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:02.518 ************************************ 00:23:02.518 END TEST nvmf_target_extra 00:23:02.518 ************************************ 00:23:02.518 13:17:08 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:02.518 13:17:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:02.518 13:17:08 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:02.518 13:17:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:02.518 ************************************ 00:23:02.518 START TEST nvmf_host 00:23:02.518 ************************************ 00:23:02.518 13:17:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:02.518 * Looking for test storage... 
00:23:02.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:23:02.518 13:17:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:02.518 13:17:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:02.518 13:17:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:02.776 13:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:02.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.776 --rc genhtml_branch_coverage=1 00:23:02.776 --rc genhtml_function_coverage=1 00:23:02.776 --rc genhtml_legend=1 00:23:02.776 --rc geninfo_all_blocks=1 00:23:02.777 --rc geninfo_unexecuted_blocks=1 00:23:02.777 00:23:02.777 ' 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:02.777 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:23:02.777 --rc genhtml_branch_coverage=1 00:23:02.777 --rc genhtml_function_coverage=1 00:23:02.777 --rc genhtml_legend=1 00:23:02.777 --rc geninfo_all_blocks=1 00:23:02.777 --rc geninfo_unexecuted_blocks=1 00:23:02.777 00:23:02.777 ' 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:02.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.777 --rc genhtml_branch_coverage=1 00:23:02.777 --rc genhtml_function_coverage=1 00:23:02.777 --rc genhtml_legend=1 00:23:02.777 --rc geninfo_all_blocks=1 00:23:02.777 --rc geninfo_unexecuted_blocks=1 00:23:02.777 00:23:02.777 ' 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:02.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.777 --rc genhtml_branch_coverage=1 00:23:02.777 --rc genhtml_function_coverage=1 00:23:02.777 --rc genhtml_legend=1 00:23:02.777 --rc geninfo_all_blocks=1 00:23:02.777 --rc geninfo_unexecuted_blocks=1 00:23:02.777 00:23:02.777 ' 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:02.777 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
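Before each sub-test the harness probes the installed lcov and, via the `lt`/`cmp_versions` helpers traced above, enables extra coverage flags only when lcov is older than 2. A condensed reimplementation of that component-wise version compare, under the same `IFS=.-:` splitting the trace shows (this is an illustrative rewrite, not the scripts/common.sh source):

```bash
# Sketch: 'lt A B' succeeds when version A sorts before version B.
lt() {
  local -a v1 v2
  IFS=.-: read -ra v1 <<< "$1"
  IFS=.-: read -ra v2 <<< "$2"
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    # Missing components count as 0, so 1.15 vs 2 compares as 1.15 vs 2.0.
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
  done
  return 1   # equal versions are not less-than
}

lt 1.15 2 && echo "lcov < 2: exporting the branch/function coverage flags"
```

In the trace this is the path taken: `lt 1.15 2` succeeds, so `LCOV_OPTS` and `LCOV` are exported with `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` before the multicontroller test proper begins.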
00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.777 ************************************ 00:23:02.777 START TEST nvmf_multicontroller 00:23:02.777 ************************************ 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:02.777 * Looking for test storage... 00:23:02.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:02.777 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:03.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.036 --rc genhtml_branch_coverage=1 00:23:03.036 --rc genhtml_function_coverage=1 00:23:03.036 --rc genhtml_legend=1 00:23:03.036 --rc geninfo_all_blocks=1 00:23:03.036 --rc geninfo_unexecuted_blocks=1 00:23:03.036 00:23:03.036 ' 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:03.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.036 --rc genhtml_branch_coverage=1 00:23:03.036 --rc genhtml_function_coverage=1 00:23:03.036 --rc genhtml_legend=1 00:23:03.036 --rc geninfo_all_blocks=1 00:23:03.036 --rc geninfo_unexecuted_blocks=1 00:23:03.036 00:23:03.036 ' 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:03.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.036 --rc genhtml_branch_coverage=1 00:23:03.036 --rc genhtml_function_coverage=1 00:23:03.036 --rc genhtml_legend=1 00:23:03.036 --rc geninfo_all_blocks=1 00:23:03.036 --rc geninfo_unexecuted_blocks=1 00:23:03.036 00:23:03.036 ' 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:03.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.036 --rc genhtml_branch_coverage=1 00:23:03.036 --rc genhtml_function_coverage=1 00:23:03.036 --rc genhtml_legend=1 00:23:03.036 --rc geninfo_all_blocks=1 00:23:03.036 --rc geninfo_unexecuted_blocks=1 00:23:03.036 00:23:03.036 ' 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:03.036 13:17:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.036 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:03.037 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:03.037 13:17:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:03.037 13:17:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:03.037 Cannot find device "nvmf_init_br" 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:03.037 Cannot find device "nvmf_init_br2" 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:03.037 Cannot find device "nvmf_tgt_br" 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:03.037 Cannot find device "nvmf_tgt_br2" 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:03.037 Cannot find device "nvmf_init_br" 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:03.037 Cannot find device "nvmf_init_br2" 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:03.037 Cannot find device "nvmf_tgt_br" 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:03.037 Cannot find device "nvmf_tgt_br2" 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:03.037 Cannot find device "nvmf_br" 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:03.037 Cannot find device "nvmf_init_if" 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:03.037 Cannot find device "nvmf_init_if2" 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:03.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:03.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:03.037 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:03.296 13:17:09 
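The "Cannot find device" and "Cannot open network namespace" lines above are expected: nvmf_veth_init tears down any leftovers before building, and each failed removal is followed by true so set -e never trips. After that cleanup the dual-path test network is assembled from scratch; reduced to its commands, the sequence just executed is:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk        # target ends live in the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if              # initiator addresses
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                       # the individual 'ip link set ... up'
  ip link set nvmf_br up                                # entries are elided here

The four *_br veth peers are enslaved to nvmf_br in the entries that follow, giving the initiator side (10.0.0.1/.2) L2 connectivity to the namespaced target side (10.0.0.3/.4).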
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:03.296 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:03.296 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:23:03.296 00:23:03.296 --- 10.0.0.3 ping statistics --- 00:23:03.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.296 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:03.296 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:03.296 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:23:03.296 00:23:03.296 --- 10.0.0.4 ping statistics --- 00:23:03.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.296 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:03.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:03.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:23:03.296 00:23:03.296 --- 10.0.0.1 ping statistics --- 00:23:03.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.296 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:03.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:03.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:23:03.296 00:23:03.296 --- 10.0.0.2 ping statistics --- 00:23:03.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.296 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=86447 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 86447 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 86447 ']' 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.296 13:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.296 [2024-12-06 13:17:09.820641] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
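With connectivity proven by the four cross-namespace pings, nvmfappstart brings up the target. common.sh@227 above prepends the namespace wrapper to the app array, so the command launched at @508 is ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE; waitforlisten then blocks until pid 86447 answers on /var/tmp/spdk.sock. A hedged condensation of that wait loop (the real autotest_common.sh version differs in detail):

  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")          # @227 above
  "${NVMF_APP[@]}" -m 0xE & nvmfpid=$!
  max_retries=100                                                 # @840 above
  for (( i = 0; i < max_retries; i++ )); do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1                      # app died before listening
    rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
  done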
00:23:03.297 [2024-12-06 13:17:09.821261] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.555 [2024-12-06 13:17:09.966224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:03.555 [2024-12-06 13:17:10.005838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.555 [2024-12-06 13:17:10.005909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.555 [2024-12-06 13:17:10.005922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.555 [2024-12-06 13:17:10.005931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.555 [2024-12-06 13:17:10.005938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:03.555 [2024-12-06 13:17:10.006746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.555 [2024-12-06 13:17:10.007293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:03.555 [2024-12-06 13:17:10.007311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.813 [2024-12-06 13:17:10.146711] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.813 Malloc0 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.813 [2024-12-06 13:17:10.198926] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.813 [2024-12-06 13:17:10.206878] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.813 Malloc1 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.813 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.814 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:23:03.814 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.814 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.814 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.814 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:23:03.814 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.814 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.814 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.814 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=86486 00:23:03.814 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:03.814 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:03.814 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86486 /var/tmp/bdevperf.sock 00:23:03.814 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 86486 ']' 00:23:03.814 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:03.814 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:03.814 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
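Everything bdevperf will exercise is now provisioned over RPC: a TCP transport (@27), two 64 MiB / 512 B-block malloc bdevs, and subsystems cnode1/cnode2 each listening on 10.0.0.3:4420 and :4421. bdevperf itself is started idle on its own RPC socket and configured the same way. The shape of the harness, replayed outside the test:

  # target side (socket /var/tmp/spdk.sock), as issued above:
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  # (cnode2 mirrors cnode1 with Malloc1)
  # initiator side: bdevperf waits (-z) for bdevs over RPC, then bdevperf.py
  # triggers the 1 s, qd=128, 4 KiB write run (seen at @95 further below):
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests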
00:23:03.814 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.814 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.381 NVMe0n1 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.381 1 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.381 2024/12/06 13:17:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:04.381 request: 00:23:04.381 { 00:23:04.381 "method": "bdev_nvme_attach_controller", 00:23:04.381 "params": { 00:23:04.381 "name": "NVMe0", 00:23:04.381 "trtype": "tcp", 00:23:04.381 "traddr": "10.0.0.3", 00:23:04.381 "adrfam": "ipv4", 00:23:04.381 "trsvcid": "4420", 00:23:04.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.381 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:04.381 "hostaddr": "10.0.0.1", 00:23:04.381 "prchk_reftag": false, 00:23:04.381 "prchk_guard": false, 00:23:04.381 "hdgst": false, 00:23:04.381 "ddgst": false, 00:23:04.381 "allow_unrecognized_csi": false 00:23:04.381 } 00:23:04.381 } 00:23:04.381 Got JSON-RPC error response 00:23:04.381 GoRPCClient: error on JSON-RPC call 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.381 2024/12/06 13:17:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:04.381 request: 00:23:04.381 { 00:23:04.381 "method": "bdev_nvme_attach_controller", 00:23:04.381 "params": { 00:23:04.381 "name": "NVMe0", 00:23:04.381 "trtype": "tcp", 00:23:04.381 "traddr": "10.0.0.3", 00:23:04.381 "adrfam": "ipv4", 00:23:04.381 "trsvcid": "4420", 00:23:04.381 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:04.381 "hostaddr": "10.0.0.1", 00:23:04.381 "prchk_reftag": false, 00:23:04.381 "prchk_guard": false, 00:23:04.381 "hdgst": false, 00:23:04.381 "ddgst": false, 00:23:04.381 "allow_unrecognized_csi": false 00:23:04.381 } 00:23:04.381 } 00:23:04.381 Got JSON-RPC error response 00:23:04.381 GoRPCClient: error on JSON-RPC call 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.381 2024/12/06 13:17:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:23:04.381 request: 00:23:04.381 { 00:23:04.381 
"method": "bdev_nvme_attach_controller", 00:23:04.381 "params": { 00:23:04.381 "name": "NVMe0", 00:23:04.381 "trtype": "tcp", 00:23:04.381 "traddr": "10.0.0.3", 00:23:04.381 "adrfam": "ipv4", 00:23:04.381 "trsvcid": "4420", 00:23:04.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.381 "hostaddr": "10.0.0.1", 00:23:04.381 "prchk_reftag": false, 00:23:04.381 "prchk_guard": false, 00:23:04.381 "hdgst": false, 00:23:04.381 "ddgst": false, 00:23:04.381 "multipath": "disable", 00:23:04.381 "allow_unrecognized_csi": false 00:23:04.381 } 00:23:04.381 } 00:23:04.381 Got JSON-RPC error response 00:23:04.381 GoRPCClient: error on JSON-RPC call 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:04.381 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:04.382 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:04.382 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:04.382 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.382 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:04.382 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.382 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:04.382 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.382 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.382 2024/12/06 13:17:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:04.382 request: 00:23:04.382 { 00:23:04.382 "method": "bdev_nvme_attach_controller", 00:23:04.382 "params": { 00:23:04.382 "name": "NVMe0", 00:23:04.382 "trtype": "tcp", 00:23:04.382 "traddr": 
"10.0.0.3", 00:23:04.382 "adrfam": "ipv4", 00:23:04.382 "trsvcid": "4420", 00:23:04.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.382 "hostaddr": "10.0.0.1", 00:23:04.382 "prchk_reftag": false, 00:23:04.382 "prchk_guard": false, 00:23:04.382 "hdgst": false, 00:23:04.382 "ddgst": false, 00:23:04.382 "multipath": "failover", 00:23:04.382 "allow_unrecognized_csi": false 00:23:04.382 } 00:23:04.382 } 00:23:04.382 Got JSON-RPC error response 00:23:04.382 GoRPCClient: error on JSON-RPC call 00:23:04.382 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:04.382 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:04.382 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:04.382 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:04.382 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:04.382 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:04.382 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.382 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.640 NVMe0n1 00:23:04.640 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.640 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:04.640 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.640 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.640 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.640 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:04.640 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.640 13:17:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.640 00:23:04.640 13:17:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.640 13:17:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:04.640 13:17:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:04.640 13:17:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.640 13:17:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.640 13:17:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.640 13:17:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:04.640 13:17:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:06.017 {
00:23:06.017 "results": [
00:23:06.017 {
00:23:06.017 "job": "NVMe0n1",
00:23:06.017 "core_mask": "0x1",
00:23:06.017 "workload": "write",
00:23:06.017 "status": "finished",
00:23:06.017 "queue_depth": 128,
00:23:06.017 "io_size": 4096,
00:23:06.017 "runtime": 1.005173,
00:23:06.017 "iops": 16688.66951261126,
00:23:06.017 "mibps": 65.19011528363774,
00:23:06.017 "io_failed": 0,
00:23:06.017 "io_timeout": 0,
00:23:06.017 "avg_latency_us": 7654.96084844872,
00:23:06.017 "min_latency_us": 2904.4363636363637,
00:23:06.017 "max_latency_us": 19541.643636363635
00:23:06.017 }
00:23:06.017 ],
00:23:06.017 "core_count": 1
00:23:06.017 }
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]]
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:06.017 nvme1n1
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr'
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]]
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2
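The four rejected attach calls above, taken together, pin down bdev_nvme's reuse rules for a controller name: re-attaching NVMe0 over the identical path fails with -114 whether or not a hostnqn is supplied, pointing the name at a different subsystem (cnode2) fails, -x disable refuses any second controller under the same name, and -x failover still requires the path itself to be new. The one accepted variant is the same subsystem over a genuinely different path, as exercised at @79:

  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

The tail of the block above also shows what -i does: it selects the initiator-side source address, and the test proves it took effect by asking the target which peer address the subsystem's queue pairs arrived from:

  rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 | jq -r '.[].peer_address.traddr'
  # expected: 10.0.0.1 after 'attach ... -i 10.0.0.1', then 10.0.0.2 after the re-attach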
00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.017 nvme1n1 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 86486 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 86486 ']' 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 86486 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86486 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86486' 00:23:06.017 killing process with pid 86486 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 86486 00:23:06.017 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 86486 00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - 
SIGINT SIGTERM EXIT 00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:06.276 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:23:06.276 [2024-12-06 13:17:10.319608] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:23:06.276 [2024-12-06 13:17:10.319760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86486 ] 00:23:06.276 [2024-12-06 13:17:10.465132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.276 [2024-12-06 13:17:10.504329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.276 [2024-12-06 13:17:10.998653] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 7124f0f6-8826-43fd-b6ea-d3e2a321ac9e already exists 00:23:06.276 [2024-12-06 13:17:10.998727] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:7124f0f6-8826-43fd-b6ea-d3e2a321ac9e alias for bdev NVMe1n1 00:23:06.276 [2024-12-06 13:17:10.998746] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:06.276 Running I/O for 1 seconds... 
00:23:06.276 16647.00 IOPS, 65.03 MiB/s
00:23:06.276 Latency(us)
00:23:06.276 [2024-12-06T13:17:12.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:06.276 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:23:06.276 NVMe0n1 : 1.01 16688.67 65.19 0.00 0.00 7654.96 2904.44 19541.64
00:23:06.276 [2024-12-06T13:17:12.804Z] ===================================================================================================================
00:23:06.276 [2024-12-06T13:17:12.804Z] Total : 16688.67 65.19 0.00 0.00 7654.96 2904.44 19541.64
00:23:06.276 Received shutdown signal, test time was about 1.000000 seconds
00:23:06.276
00:23:06.276 Latency(us)
00:23:06.276 [2024-12-06T13:17:12.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:06.276 [2024-12-06T13:17:12.804Z] ===================================================================================================================
00:23:06.276 [2024-12-06T13:17:12.804Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:06.276 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt ---
00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:06.276 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:06.276 rmmod nvme_tcp
00:23:06.276 rmmod nvme_fabrics
00:23:06.276 rmmod nvme_keyring
00:23:06.535 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:06.535 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:23:06.535 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
00:23:06.535 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 86447 ']'
00:23:06.535 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 86447
00:23:06.535 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 86447 ']'
00:23:06.535 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 86447
00:23:06.535 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname
00:23:06.535 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:06.535 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86447
00:23:06.535 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:06.535 killing process with pid 86447
00:23:06.535 13:17:12
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:06.535 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86447' 00:23:06.535 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 86447 00:23:06.535 13:17:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 86447 00:23:06.792 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:06.792 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:06.792 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:06.792 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:06.792 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:06.792 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:06.792 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
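Firewall cleanup mirrors setup: every rule the test inserted earlier went in through the ipts wrapper, which appends an SPDK_NVMF comment (visible verbatim in the @790 expansions earlier), so teardown can drop exactly those rules in one filter pass instead of tracking rule numbers. A sketch of the pair, reconstructed from the @790/@791 expansions in this log:

  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }   # tag every inserted rule
  iptables-save | grep -v SPDK_NVMF | iptables-restore            # strip only tagged rules on exit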
00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:23:06.793 ************************************ 00:23:06.793 END TEST nvmf_multicontroller 00:23:06.793 ************************************ 00:23:06.793 00:23:06.793 real 0m4.175s 00:23:06.793 user 0m11.870s 00:23:06.793 sys 0m1.126s 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.793 13:17:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.052 ************************************ 00:23:07.052 START TEST nvmf_aer 00:23:07.052 ************************************ 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:07.052 * Looking for test storage... 00:23:07.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:07.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.052 --rc genhtml_branch_coverage=1 00:23:07.052 --rc genhtml_function_coverage=1 00:23:07.052 --rc genhtml_legend=1 00:23:07.052 --rc geninfo_all_blocks=1 00:23:07.052 --rc geninfo_unexecuted_blocks=1 00:23:07.052 00:23:07.052 ' 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:07.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.052 --rc genhtml_branch_coverage=1 00:23:07.052 --rc genhtml_function_coverage=1 00:23:07.052 --rc genhtml_legend=1 00:23:07.052 --rc geninfo_all_blocks=1 00:23:07.052 --rc geninfo_unexecuted_blocks=1 00:23:07.052 00:23:07.052 ' 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:07.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.052 --rc genhtml_branch_coverage=1 00:23:07.052 --rc genhtml_function_coverage=1 00:23:07.052 --rc genhtml_legend=1 00:23:07.052 --rc geninfo_all_blocks=1 00:23:07.052 --rc geninfo_unexecuted_blocks=1 00:23:07.052 00:23:07.052 ' 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:07.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.052 --rc genhtml_branch_coverage=1 00:23:07.052 --rc genhtml_function_coverage=1 00:23:07.052 --rc genhtml_legend=1 00:23:07.052 --rc geninfo_all_blocks=1 00:23:07.052 --rc geninfo_unexecuted_blocks=1 00:23:07.052 00:23:07.052 ' 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.052 
13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:07.052 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:07.053 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]] 
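[Editor's note] The "[: : integer expression expected" complaint just above deserves a remark: the xtrace at nvmf/common.sh@33 shows the shell evaluating '[' '' -eq 1 ']', i.e. the variable under test is unset, so test receives an empty string where -eq requires an integer. The run tolerates it (the comparison simply fails and the trace continues at @37), and the same message reappears once per suite later in this log. A minimal sketch of the usual guard, with SOME_FLAG standing in for the real variable (its name is not visible in the captured trace):

    # Emits "[: : integer expression expected" whenever SOME_FLAG is unset or empty:
    if [ "$SOME_FLAG" -eq 1 ]; then
        echo "flag set"
    fi

    # Defaulting the expansion keeps the comparison well-formed in every case:
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag set"
    fi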
00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:07.053 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:07.311 Cannot find device "nvmf_init_br" 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:07.311 Cannot find device "nvmf_init_br2" 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:07.311 Cannot find device "nvmf_tgt_br" 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:07.311 Cannot find device "nvmf_tgt_br2" 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:07.311 Cannot find device "nvmf_init_br" 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:07.311 Cannot find device "nvmf_init_br2" 00:23:07.311 13:17:13 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:07.311 Cannot find device "nvmf_tgt_br" 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:07.311 Cannot find device "nvmf_tgt_br2" 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:07.311 Cannot find device "nvmf_br" 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:07.311 Cannot find device "nvmf_init_if" 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:07.311 Cannot find device "nvmf_init_if2" 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:07.311 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:07.311 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:07.311 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:07.569 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:07.569 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:23:07.569 00:23:07.569 --- 10.0.0.3 ping statistics --- 00:23:07.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.569 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:07.569 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:07.569 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:23:07.569 00:23:07.569 --- 10.0.0.4 ping statistics --- 00:23:07.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.569 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:07.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:07.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:23:07.569 00:23:07.569 --- 10.0.0.1 ping statistics --- 00:23:07.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.569 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:07.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:23:07.569 00:23:07.569 --- 10.0.0.2 ping statistics --- 00:23:07.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.569 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=86779 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 86779 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 86779 ']' 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.569 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.570 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.570 13:17:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.570 [2024-12-06 13:17:14.021329] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:23:07.570 [2024-12-06 13:17:14.021459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.827 [2024-12-06 13:17:14.178549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:07.827 [2024-12-06 13:17:14.223147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.827 [2024-12-06 13:17:14.223222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.827 [2024-12-06 13:17:14.223239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.827 [2024-12-06 13:17:14.223248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.827 [2024-12-06 13:17:14.223255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.827 [2024-12-06 13:17:14.223997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.827 [2024-12-06 13:17:14.224026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.827 [2024-12-06 13:17:14.224117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.827 [2024-12-06 13:17:14.224123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.827 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.827 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:07.827 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:07.827 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:07.827 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.827 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.827 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:07.827 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.827 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.085 [2024-12-06 13:17:14.352387] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.085 Malloc0 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.085 [2024-12-06 13:17:14.409960] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.085 [ 00:23:08.085 { 00:23:08.085 "allow_any_host": true, 00:23:08.085 "hosts": [], 00:23:08.085 "listen_addresses": [], 00:23:08.085 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:08.085 "subtype": "Discovery" 00:23:08.085 }, 00:23:08.085 { 00:23:08.085 "allow_any_host": true, 00:23:08.085 "hosts": [], 00:23:08.085 "listen_addresses": [ 00:23:08.085 { 00:23:08.085 "adrfam": "IPv4", 00:23:08.085 "traddr": "10.0.0.3", 00:23:08.085 "trsvcid": "4420", 00:23:08.085 "trtype": "TCP" 00:23:08.085 } 00:23:08.085 ], 00:23:08.085 "max_cntlid": 65519, 00:23:08.085 "max_namespaces": 2, 00:23:08.085 "min_cntlid": 1, 00:23:08.085 "model_number": "SPDK bdev Controller", 00:23:08.085 "namespaces": [ 00:23:08.085 { 00:23:08.085 "bdev_name": "Malloc0", 00:23:08.085 "name": "Malloc0", 00:23:08.085 "nguid": "1DBEECB6CB6A48F6A0FF289DC02B101D", 00:23:08.085 "nsid": 1, 00:23:08.085 "uuid": "1dbeecb6-cb6a-48f6-a0ff-289dc02b101d" 00:23:08.085 } 00:23:08.085 ], 00:23:08.085 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.085 "serial_number": "SPDK00000000000001", 00:23:08.085 "subtype": "NVMe" 00:23:08.085 } 00:23:08.085 ] 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=86823 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:08.085 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.344 Malloc1 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.344 [ 00:23:08.344 { 00:23:08.344 "allow_any_host": true, 00:23:08.344 "hosts": [], 00:23:08.344 "listen_addresses": [], 00:23:08.344 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:08.344 "subtype": "Discovery" 00:23:08.344 }, 00:23:08.344 { 00:23:08.344 "allow_any_host": true, 00:23:08.344 "hosts": [], 00:23:08.344 "listen_addresses": [ 00:23:08.344 { 00:23:08.344 "adrfam": "IPv4", 00:23:08.344 "traddr": "10.0.0.3", 00:23:08.344 "trsvcid": "4420", 00:23:08.344 "trtype": "TCP" 00:23:08.344 } 00:23:08.344 ], 00:23:08.344 "max_cntlid": 65519, 00:23:08.344 "max_namespaces": 2, 00:23:08.344 "min_cntlid": 1, 00:23:08.344 Asynchronous Event Request test 00:23:08.344 Attaching to 10.0.0.3 00:23:08.344 Attached to 10.0.0.3 00:23:08.344 Registering asynchronous event callbacks... 
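[Editor's note] While the aer binary above polls for /tmp/aer_touch_file, the target state it exercises was assembled entirely through rpc_cmd, which in these test scripts is a thin wrapper around SPDK's JSON-RPC client. Stripped of the xtrace prefixes, the sequence reduces to the sketch below; scripts/rpc.py and the default /var/tmp/spdk.sock socket are assumptions here, while the method names and arguments are verbatim from the trace:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB IO unit size
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0       # 64 MB ramdisk, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 2                             # allow any host, at most 2 namespaces
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420
    # Hot-adding a second namespace is what fires the AER the test waits for:
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The nvmf_get_subsystems dump interleaved with the aer output around this point confirms the result: one subsystem with two namespaces, listening on 10.0.0.3:4420.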
00:23:08.344 Starting namespace attribute notice tests for all controllers... 00:23:08.344 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:08.344 aer_cb - Changed Namespace 00:23:08.344 Cleaning up... 00:23:08.344 "model_number": "SPDK bdev Controller", 00:23:08.344 "namespaces": [ 00:23:08.344 { 00:23:08.344 "bdev_name": "Malloc0", 00:23:08.344 "name": "Malloc0", 00:23:08.344 "nguid": "1DBEECB6CB6A48F6A0FF289DC02B101D", 00:23:08.344 "nsid": 1, 00:23:08.344 "uuid": "1dbeecb6-cb6a-48f6-a0ff-289dc02b101d" 00:23:08.344 }, 00:23:08.344 { 00:23:08.344 "bdev_name": "Malloc1", 00:23:08.344 "name": "Malloc1", 00:23:08.344 "nguid": "15D6B529996C4164A59D12F28A1B0B71", 00:23:08.344 "nsid": 2, 00:23:08.344 "uuid": "15d6b529-996c-4164-a59d-12f28a1b0b71" 00:23:08.344 } 00:23:08.344 ], 00:23:08.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.344 "serial_number": "SPDK00000000000001", 00:23:08.344 "subtype": "NVMe" 00:23:08.344 } 00:23:08.344 ] 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 86823 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:08.344 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:08.602 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:08.602 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:08.602 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:08.602 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:08.602 rmmod nvme_tcp 00:23:08.602 rmmod nvme_fabrics 00:23:08.602 rmmod nvme_keyring 00:23:08.602 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:08.602 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:08.602 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:08.602 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@517 -- # '[' -n 86779 ']' 00:23:08.602 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 86779 00:23:08.602 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 86779 ']' 00:23:08.602 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 86779 00:23:08.602 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:08.602 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.602 13:17:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86779 00:23:08.602 killing process with pid 86779 00:23:08.602 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:08.602 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:08.602 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86779' 00:23:08.602 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 86779 00:23:08.602 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 86779 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:08.865 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:23:09.140 00:23:09.140 real 0m2.069s 00:23:09.140 user 0m4.053s 00:23:09.140 sys 0m0.657s 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.140 ************************************ 00:23:09.140 END TEST nvmf_aer 00:23:09.140 ************************************ 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.140 ************************************ 00:23:09.140 START TEST nvmf_async_init 00:23:09.140 ************************************ 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:09.140 * Looking for test storage... 
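[Editor's note] With nvmf_aer finished, the harness has just torn the virtual network down, and the nvmftestinit of the next suite (below) rebuilds it from scratch, so this log walks through the per-suite network lifecycle twice. The sequence, condensed into plain shell from the traces above, with the ip link ... up calls dropped for brevity and remove_spdk_ns expanded to the namespace deletion it presumably performs (its body is not captured in this log):

    # Build: two initiator veth pairs and two target veth pairs; the target
    # ends move into a private namespace, the bridge ends join nvmf_br.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Firewall openings are tagged so teardown can strip exactly these rules:
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    # Teardown: reload the ruleset minus the tagged rules, then unwind the links
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed body of remove_spdk_ns

The initiator therefore reaches the target over the bridge at 10.0.0.3:4420, which is why that traddr appears both in nvmf_subsystem_add_listener and in the ping checks that validate the topology before each suite runs.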
00:23:09.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:09.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.140 --rc genhtml_branch_coverage=1 00:23:09.140 --rc genhtml_function_coverage=1 00:23:09.140 --rc genhtml_legend=1 00:23:09.140 --rc geninfo_all_blocks=1 00:23:09.140 --rc geninfo_unexecuted_blocks=1 00:23:09.140 00:23:09.140 ' 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:09.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.140 --rc genhtml_branch_coverage=1 00:23:09.140 --rc genhtml_function_coverage=1 00:23:09.140 --rc genhtml_legend=1 00:23:09.140 --rc geninfo_all_blocks=1 00:23:09.140 --rc geninfo_unexecuted_blocks=1 00:23:09.140 00:23:09.140 ' 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:09.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.140 --rc genhtml_branch_coverage=1 00:23:09.140 --rc genhtml_function_coverage=1 00:23:09.140 --rc genhtml_legend=1 00:23:09.140 --rc geninfo_all_blocks=1 00:23:09.140 --rc geninfo_unexecuted_blocks=1 00:23:09.140 00:23:09.140 ' 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:09.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.140 --rc genhtml_branch_coverage=1 00:23:09.140 --rc genhtml_function_coverage=1 00:23:09.140 --rc genhtml_legend=1 00:23:09.140 --rc geninfo_all_blocks=1 00:23:09.140 --rc geninfo_unexecuted_blocks=1 00:23:09.140 00:23:09.140 ' 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.140 13:17:15 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.140 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:09.141 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:09.141 13:17:15 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=78e6c5e5ed9947d284d81d9988d111cf 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:09.141 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:09.398 Cannot find device "nvmf_init_br" 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:09.399 Cannot find device "nvmf_init_br2" 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:09.399 Cannot find device "nvmf_tgt_br" 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:09.399 Cannot find device "nvmf_tgt_br2" 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:09.399 Cannot find device "nvmf_init_br" 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:09.399 Cannot find device "nvmf_init_br2" 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:09.399 Cannot find device "nvmf_tgt_br" 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:09.399 Cannot find device "nvmf_tgt_br2" 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:09.399 Cannot find device "nvmf_br" 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:09.399 Cannot find device "nvmf_init_if" 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:09.399 Cannot find device "nvmf_init_if2" 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:09.399 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:23:09.399 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:09.399 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:09.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:09.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:09.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:09.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:09.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:09.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:09.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:09.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:09.656 13:17:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:09.656 13:17:16 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:09.656 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:09.656 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:23:09.656 00:23:09.656 --- 10.0.0.3 ping statistics --- 00:23:09.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.656 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:09.656 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:09.656 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:23:09.656 00:23:09.656 --- 10.0.0.4 ping statistics --- 00:23:09.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.656 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:09.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:09.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:23:09.656 00:23:09.656 --- 10.0.0.1 ping statistics --- 00:23:09.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.656 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:09.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:09.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:23:09.656 00:23:09.656 --- 10.0.0.2 ping statistics --- 00:23:09.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.656 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:09.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=87051 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 87051 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 87051 ']' 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.656 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:09.656 [2024-12-06 13:17:16.115605] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:23:09.656 [2024-12-06 13:17:16.115882] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.914 [2024-12-06 13:17:16.262294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.914 [2024-12-06 13:17:16.299998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
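With the target process up inside the namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0x1, pid 87051), the async_init test body drives everything over JSON-RPC. A condensed sketch of the rpc_cmd sequence that follows, with every argument taken from this log; rpc.py talking to the default /var/tmp/spdk.sock stands in here for the harness's rpc_cmd wrapper:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc bdev_null_create null0 1024 512                   # 1024 MiB null bdev, 512 B blocks
  $rpc bdev_wait_for_examine
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
      -g 78e6c5e5ed9947d284d81d9988d111cf                # NGUID is uuidgen output minus dashes
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0                      # surfaces as bdev nvme0n1

The bdev_get_bdevs dumps below confirm the attach, and the cntlid field advancing 1 -> 2 -> 3 across the controller reset and the later TLS re-attach (port 4421, a --secure-channel listener, and a PSK registered via keyring_file_add_key) is the behavior the test is really checking.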
00:23:09.914 [2024-12-06 13:17:16.300254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.914 [2024-12-06 13:17:16.300279] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.914 [2024-12-06 13:17:16.300290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.914 [2024-12-06 13:17:16.300299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.914 [2024-12-06 13:17:16.300667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.914 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.914 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:09.914 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.914 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:09.914 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:09.914 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.914 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:09.914 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.914 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:09.914 [2024-12-06 13:17:16.432777] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.914 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.914 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:09.914 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.914 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:10.171 null0 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 78e6c5e5ed9947d284d81d9988d111cf 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 
-- # xtrace_disable 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:10.171 [2024-12-06 13:17:16.476900] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.171 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:10.441 nvme0n1 00:23:10.441 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.441 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:10.441 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.441 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:10.441 [ 00:23:10.441 { 00:23:10.441 "aliases": [ 00:23:10.441 "78e6c5e5-ed99-47d2-84d8-1d9988d111cf" 00:23:10.441 ], 00:23:10.441 "assigned_rate_limits": { 00:23:10.441 "r_mbytes_per_sec": 0, 00:23:10.441 "rw_ios_per_sec": 0, 00:23:10.441 "rw_mbytes_per_sec": 0, 00:23:10.441 "w_mbytes_per_sec": 0 00:23:10.441 }, 00:23:10.441 "block_size": 512, 00:23:10.441 "claimed": false, 00:23:10.441 "driver_specific": { 00:23:10.441 "mp_policy": "active_passive", 00:23:10.441 "nvme": [ 00:23:10.441 { 00:23:10.441 "ctrlr_data": { 00:23:10.441 "ana_reporting": false, 00:23:10.441 "cntlid": 1, 00:23:10.441 "firmware_revision": "25.01", 00:23:10.441 "model_number": "SPDK bdev Controller", 00:23:10.441 "multi_ctrlr": true, 00:23:10.441 "oacs": { 00:23:10.441 "firmware": 0, 00:23:10.441 "format": 0, 00:23:10.441 "ns_manage": 0, 00:23:10.441 "security": 0 00:23:10.441 }, 00:23:10.441 "serial_number": "00000000000000000000", 00:23:10.441 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:10.441 "vendor_id": "0x8086" 00:23:10.441 }, 00:23:10.441 "ns_data": { 00:23:10.441 "can_share": true, 00:23:10.441 "id": 1 00:23:10.441 }, 00:23:10.441 "trid": { 00:23:10.441 "adrfam": "IPv4", 00:23:10.441 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:10.441 "traddr": "10.0.0.3", 00:23:10.441 "trsvcid": "4420", 00:23:10.441 "trtype": "TCP" 00:23:10.441 }, 00:23:10.441 "vs": { 00:23:10.441 "nvme_version": "1.3" 00:23:10.441 } 00:23:10.441 } 00:23:10.441 ] 00:23:10.441 }, 00:23:10.441 "memory_domains": [ 00:23:10.441 { 00:23:10.441 "dma_device_id": "system", 00:23:10.441 "dma_device_type": 1 00:23:10.441 } 00:23:10.441 ], 00:23:10.441 "name": "nvme0n1", 00:23:10.441 "num_blocks": 2097152, 00:23:10.441 "numa_id": -1, 00:23:10.441 "product_name": "NVMe disk", 00:23:10.441 "supported_io_types": { 00:23:10.442 "abort": true, 
00:23:10.442 "compare": true, 00:23:10.442 "compare_and_write": true, 00:23:10.442 "copy": true, 00:23:10.442 "flush": true, 00:23:10.442 "get_zone_info": false, 00:23:10.442 "nvme_admin": true, 00:23:10.442 "nvme_io": true, 00:23:10.442 "nvme_io_md": false, 00:23:10.442 "nvme_iov_md": false, 00:23:10.442 "read": true, 00:23:10.442 "reset": true, 00:23:10.442 "seek_data": false, 00:23:10.442 "seek_hole": false, 00:23:10.442 "unmap": false, 00:23:10.442 "write": true, 00:23:10.442 "write_zeroes": true, 00:23:10.442 "zcopy": false, 00:23:10.442 "zone_append": false, 00:23:10.442 "zone_management": false 00:23:10.442 }, 00:23:10.442 "uuid": "78e6c5e5-ed99-47d2-84d8-1d9988d111cf", 00:23:10.442 "zoned": false 00:23:10.442 } 00:23:10.442 ] 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:10.442 [2024-12-06 13:17:16.752803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:10.442 [2024-12-06 13:17:16.752935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1178130 (9): Bad file descriptor 00:23:10.442 [2024-12-06 13:17:16.885151] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:10.442 [ 00:23:10.442 { 00:23:10.442 "aliases": [ 00:23:10.442 "78e6c5e5-ed99-47d2-84d8-1d9988d111cf" 00:23:10.442 ], 00:23:10.442 "assigned_rate_limits": { 00:23:10.442 "r_mbytes_per_sec": 0, 00:23:10.442 "rw_ios_per_sec": 0, 00:23:10.442 "rw_mbytes_per_sec": 0, 00:23:10.442 "w_mbytes_per_sec": 0 00:23:10.442 }, 00:23:10.442 "block_size": 512, 00:23:10.442 "claimed": false, 00:23:10.442 "driver_specific": { 00:23:10.442 "mp_policy": "active_passive", 00:23:10.442 "nvme": [ 00:23:10.442 { 00:23:10.442 "ctrlr_data": { 00:23:10.442 "ana_reporting": false, 00:23:10.442 "cntlid": 2, 00:23:10.442 "firmware_revision": "25.01", 00:23:10.442 "model_number": "SPDK bdev Controller", 00:23:10.442 "multi_ctrlr": true, 00:23:10.442 "oacs": { 00:23:10.442 "firmware": 0, 00:23:10.442 "format": 0, 00:23:10.442 "ns_manage": 0, 00:23:10.442 "security": 0 00:23:10.442 }, 00:23:10.442 "serial_number": "00000000000000000000", 00:23:10.442 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:10.442 "vendor_id": "0x8086" 00:23:10.442 }, 00:23:10.442 "ns_data": { 00:23:10.442 "can_share": true, 00:23:10.442 "id": 1 00:23:10.442 }, 00:23:10.442 "trid": { 00:23:10.442 "adrfam": "IPv4", 00:23:10.442 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:10.442 "traddr": "10.0.0.3", 00:23:10.442 "trsvcid": "4420", 00:23:10.442 "trtype": "TCP" 00:23:10.442 }, 00:23:10.442 "vs": { 00:23:10.442 "nvme_version": "1.3" 00:23:10.442 } 00:23:10.442 } 00:23:10.442 ] 
00:23:10.442 }, 00:23:10.442 "memory_domains": [ 00:23:10.442 { 00:23:10.442 "dma_device_id": "system", 00:23:10.442 "dma_device_type": 1 00:23:10.442 } 00:23:10.442 ], 00:23:10.442 "name": "nvme0n1", 00:23:10.442 "num_blocks": 2097152, 00:23:10.442 "numa_id": -1, 00:23:10.442 "product_name": "NVMe disk", 00:23:10.442 "supported_io_types": { 00:23:10.442 "abort": true, 00:23:10.442 "compare": true, 00:23:10.442 "compare_and_write": true, 00:23:10.442 "copy": true, 00:23:10.442 "flush": true, 00:23:10.442 "get_zone_info": false, 00:23:10.442 "nvme_admin": true, 00:23:10.442 "nvme_io": true, 00:23:10.442 "nvme_io_md": false, 00:23:10.442 "nvme_iov_md": false, 00:23:10.442 "read": true, 00:23:10.442 "reset": true, 00:23:10.442 "seek_data": false, 00:23:10.442 "seek_hole": false, 00:23:10.442 "unmap": false, 00:23:10.442 "write": true, 00:23:10.442 "write_zeroes": true, 00:23:10.442 "zcopy": false, 00:23:10.442 "zone_append": false, 00:23:10.442 "zone_management": false 00:23:10.442 }, 00:23:10.442 "uuid": "78e6c5e5-ed99-47d2-84d8-1d9988d111cf", 00:23:10.442 "zoned": false 00:23:10.442 } 00:23:10.442 ] 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.C3IVE1LIgJ 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.C3IVE1LIgJ 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.C3IVE1LIgJ 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.442 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:10.700 [2024-12-06 13:17:16.960995] 
tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:10.700 [2024-12-06 13:17:16.961182] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:10.700 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.700 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:10.700 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.700 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:10.700 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.700 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:10.700 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.700 13:17:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:10.700 [2024-12-06 13:17:16.981016] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.700 nvme0n1 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:10.700 [ 00:23:10.700 { 00:23:10.700 "aliases": [ 00:23:10.700 "78e6c5e5-ed99-47d2-84d8-1d9988d111cf" 00:23:10.700 ], 00:23:10.700 "assigned_rate_limits": { 00:23:10.700 "r_mbytes_per_sec": 0, 00:23:10.700 "rw_ios_per_sec": 0, 00:23:10.700 "rw_mbytes_per_sec": 0, 00:23:10.700 "w_mbytes_per_sec": 0 00:23:10.700 }, 00:23:10.700 "block_size": 512, 00:23:10.700 "claimed": false, 00:23:10.700 "driver_specific": { 00:23:10.700 "mp_policy": "active_passive", 00:23:10.700 "nvme": [ 00:23:10.700 { 00:23:10.700 "ctrlr_data": { 00:23:10.700 "ana_reporting": false, 00:23:10.700 "cntlid": 3, 00:23:10.700 "firmware_revision": "25.01", 00:23:10.700 "model_number": "SPDK bdev Controller", 00:23:10.700 "multi_ctrlr": true, 00:23:10.700 "oacs": { 00:23:10.700 "firmware": 0, 00:23:10.700 "format": 0, 00:23:10.700 "ns_manage": 0, 00:23:10.700 "security": 0 00:23:10.700 }, 00:23:10.700 "serial_number": "00000000000000000000", 00:23:10.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:10.700 "vendor_id": "0x8086" 00:23:10.700 }, 00:23:10.700 "ns_data": { 00:23:10.700 "can_share": true, 00:23:10.700 "id": 1 00:23:10.700 }, 00:23:10.700 "trid": { 00:23:10.700 "adrfam": "IPv4", 00:23:10.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:10.700 "traddr": "10.0.0.3", 00:23:10.700 "trsvcid": "4421", 00:23:10.700 "trtype": "TCP" 00:23:10.700 }, 00:23:10.700 "vs": { 00:23:10.700 "nvme_version": "1.3" 00:23:10.700 } 00:23:10.700 } 00:23:10.700 ] 00:23:10.700 }, 00:23:10.700 "memory_domains": [ 00:23:10.700 { 00:23:10.700 "dma_device_id": "system", 00:23:10.700 "dma_device_type": 1 00:23:10.700 } 00:23:10.700 ], 00:23:10.700 "name": "nvme0n1", 00:23:10.700 "num_blocks": 
2097152, 00:23:10.700 "numa_id": -1, 00:23:10.700 "product_name": "NVMe disk", 00:23:10.700 "supported_io_types": { 00:23:10.700 "abort": true, 00:23:10.700 "compare": true, 00:23:10.700 "compare_and_write": true, 00:23:10.700 "copy": true, 00:23:10.700 "flush": true, 00:23:10.700 "get_zone_info": false, 00:23:10.700 "nvme_admin": true, 00:23:10.700 "nvme_io": true, 00:23:10.700 "nvme_io_md": false, 00:23:10.700 "nvme_iov_md": false, 00:23:10.700 "read": true, 00:23:10.700 "reset": true, 00:23:10.700 "seek_data": false, 00:23:10.700 "seek_hole": false, 00:23:10.700 "unmap": false, 00:23:10.700 "write": true, 00:23:10.700 "write_zeroes": true, 00:23:10.700 "zcopy": false, 00:23:10.700 "zone_append": false, 00:23:10.700 "zone_management": false 00:23:10.700 }, 00:23:10.700 "uuid": "78e6c5e5-ed99-47d2-84d8-1d9988d111cf", 00:23:10.700 "zoned": false 00:23:10.700 } 00:23:10.700 ] 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.C3IVE1LIgJ 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:10.700 rmmod nvme_tcp 00:23:10.700 rmmod nvme_fabrics 00:23:10.700 rmmod nvme_keyring 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 87051 ']' 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 87051 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 87051 ']' 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 87051 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.700 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87051 00:23:10.959 killing process with pid 
87051 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87051' 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 87051 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 87051 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:10.959 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:11.216 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:11.216 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:11.216 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:11.216 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:11.216 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:11.216 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.216 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.216 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:23:11.216 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:23:11.216 00:23:11.216 real 0m2.150s 00:23:11.216 user 0m1.607s 00:23:11.216 sys 0m0.630s 00:23:11.216 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:11.216 ************************************ 00:23:11.216 END TEST nvmf_async_init 00:23:11.216 ************************************ 00:23:11.216 13:17:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.216 13:17:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:11.216 13:17:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:11.216 13:17:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:11.216 13:17:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.216 ************************************ 00:23:11.216 START TEST dma 00:23:11.216 ************************************ 00:23:11.216 13:17:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:11.216 * Looking for test storage... 00:23:11.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:11.216 13:17:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:11.216 13:17:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:11.216 13:17:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:11.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.476 --rc genhtml_branch_coverage=1 00:23:11.476 --rc genhtml_function_coverage=1 00:23:11.476 --rc genhtml_legend=1 00:23:11.476 --rc geninfo_all_blocks=1 00:23:11.476 --rc geninfo_unexecuted_blocks=1 00:23:11.476 00:23:11.476 ' 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:11.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.476 --rc genhtml_branch_coverage=1 00:23:11.476 --rc genhtml_function_coverage=1 00:23:11.476 --rc genhtml_legend=1 00:23:11.476 --rc geninfo_all_blocks=1 00:23:11.476 --rc geninfo_unexecuted_blocks=1 00:23:11.476 00:23:11.476 ' 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:11.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.476 --rc genhtml_branch_coverage=1 00:23:11.476 --rc genhtml_function_coverage=1 00:23:11.476 --rc genhtml_legend=1 00:23:11.476 --rc geninfo_all_blocks=1 00:23:11.476 --rc geninfo_unexecuted_blocks=1 00:23:11.476 00:23:11.476 ' 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:11.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.476 --rc genhtml_branch_coverage=1 00:23:11.476 --rc genhtml_function_coverage=1 00:23:11.476 --rc genhtml_legend=1 00:23:11.476 --rc geninfo_all_blocks=1 00:23:11.476 --rc geninfo_unexecuted_blocks=1 00:23:11.476 00:23:11.476 ' 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.476 13:17:17 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.476 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain dirs repeated several more times; duplicates elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[same three toolchain dirs repeated several more times; duplicates elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same three toolchain dirs repeated several more times; duplicates elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same three toolchain dirs repeated several more times; duplicates elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:11.477 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:11.477 00:23:11.477 real 0m0.196s 00:23:11.477 user 0m0.119s 00:23:11.477 sys 0m0.082s 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:11.477 ************************************ 00:23:11.477 END TEST dma 00:23:11.477 ************************************ 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.477 ************************************ 00:23:11.477 START TEST nvmf_identify 00:23:11.477 ************************************ 00:23:11.477 13:17:17
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:11.477 * Looking for test storage... 00:23:11.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:11.477 13:17:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:11.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.736 --rc genhtml_branch_coverage=1 00:23:11.736 --rc genhtml_function_coverage=1 00:23:11.736 --rc genhtml_legend=1 00:23:11.736 --rc geninfo_all_blocks=1 00:23:11.736 --rc geninfo_unexecuted_blocks=1 00:23:11.736 00:23:11.736 ' 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:11.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.736 --rc genhtml_branch_coverage=1 00:23:11.736 --rc genhtml_function_coverage=1 00:23:11.736 --rc genhtml_legend=1 00:23:11.736 --rc geninfo_all_blocks=1 00:23:11.736 --rc geninfo_unexecuted_blocks=1 00:23:11.736 00:23:11.736 ' 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:11.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.736 --rc genhtml_branch_coverage=1 00:23:11.736 --rc genhtml_function_coverage=1 00:23:11.736 --rc genhtml_legend=1 00:23:11.736 --rc geninfo_all_blocks=1 00:23:11.736 --rc geninfo_unexecuted_blocks=1 00:23:11.736 00:23:11.736 ' 00:23:11.736 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:11.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.736 --rc genhtml_branch_coverage=1 00:23:11.736 --rc genhtml_function_coverage=1 00:23:11.737 --rc genhtml_legend=1 00:23:11.737 --rc geninfo_all_blocks=1 00:23:11.737 --rc geninfo_unexecuted_blocks=1 00:23:11.737 00:23:11.737 ' 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.737 
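A side effect visible in the PATH dumps above: paths/export.sh prepends the same Go/protoc/golangci directories every time it is sourced, so PATH accumulates duplicate entries (harmless, since lookup stops at the first hit). A generic guard idiom that would keep PATH deduplicated is shown below; this is a common shell pattern, not the harness's own code:

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                 # already on PATH; leave it alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin      # idempotent, unlike a bare PATH=dir:$PATH
    export PATH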
13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:11.737 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.737 13:17:18 
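The "[: : integer expression expected" message captured above is not fatal: build_nvmf_app_args evaluates [ '' -eq 1 ] against a test flag that is unset in this configuration, test prints the complaint to stderr, returns false, and the script continues. Defaulting the variable before the numeric test keeps the same behavior without the noise. The flag name below is a hypothetical stand-in, since the log does not show which variable line 33 of common.sh tests:

    SOME_TEST_FLAG=""                             # hypothetical unset/empty flag
    [ "$SOME_TEST_FLAG" -eq 1 ] && echo on        # noisy: "integer expression expected", then false
    [ "${SOME_TEST_FLAG:-0}" -eq 1 ] && echo on   # quiet: empty defaults to 0, test is simply false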
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:11.737 Cannot find device "nvmf_init_br" 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:11.737 Cannot find device "nvmf_init_br2" 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:11.737 Cannot find device "nvmf_tgt_br" 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:23:11.737 Cannot find device "nvmf_tgt_br2" 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:11.737 Cannot find device "nvmf_init_br" 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:11.737 Cannot find device "nvmf_init_br2" 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:11.737 Cannot find device "nvmf_tgt_br" 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:11.737 Cannot find device "nvmf_tgt_br2" 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:11.737 Cannot find device "nvmf_br" 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:11.737 Cannot find device "nvmf_init_if" 00:23:11.737 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:23:11.738 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:11.738 Cannot find device "nvmf_init_if2" 00:23:11.738 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:23:11.738 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:11.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:11.738 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:23:11.738 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:11.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:11.738 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:23:11.738 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:11.738 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:11.738 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:11.996 
13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:11.996 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:12.255 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:12.255 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:23:12.255 00:23:12.255 --- 10.0.0.3 ping statistics --- 00:23:12.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.255 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:12.255 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:12.255 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:23:12.255 00:23:12.255 --- 10.0.0.4 ping statistics --- 00:23:12.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.255 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:12.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:12.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:23:12.255 00:23:12.255 --- 10.0.0.1 ping statistics --- 00:23:12.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.255 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:12.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:12.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:23:12.255 00:23:12.255 --- 10.0.0.2 ping statistics --- 00:23:12.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.255 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=87361 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:12.255 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 87361 00:23:12.256 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 87361 ']' 00:23:12.256 
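Everything from the "Cannot find device" lines down to the four pings is nvmf_veth_init building the test network. The failing teardown commands are expected on a fresh host (each is paired with true so the script continues); the harness then creates the nvmf_tgt_ns_spdk namespace, four veth pairs, and the nvmf_br bridge, leaving initiator addresses 10.0.0.1/2 in the root namespace and target addresses 10.0.0.3/4 inside the namespace, with comment-tagged iptables ACCEPT rules for port 4420. A reduced reproduction with a single initiator/target pair, names and addresses taken from the log (requires root):

    #!/usr/bin/env bash
    set -e
    NS=nvmf_tgt_ns_spdk

    ip netns add "$NS"
    # One veth pair per side; the *_br ends stay in the root namespace for the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns "$NS"           # target end lives in the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if      # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set lo up

    # Bridge the two root-namespace ends so initiator and target can talk.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    # Tag firewall rules with a comment so a later teardown can delete exactly these.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:allow nvmf tcp'

    ping -c 1 10.0.0.3    # same smoke test the harness runs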
13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.256 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.256 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.256 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.256 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.256 [2024-12-06 13:17:18.636165] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:23:12.256 [2024-12-06 13:17:18.636249] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.515 [2024-12-06 13:17:18.791314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:12.515 [2024-12-06 13:17:18.832206] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.515 [2024-12-06 13:17:18.832260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.515 [2024-12-06 13:17:18.832273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.515 [2024-12-06 13:17:18.832283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.515 [2024-12-06 13:17:18.832292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
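At this point the target application is up: nvmf_tgt was launched inside the namespace with an explicit core mask, and waitforlisten (pid 87361 here) polls until the RPC socket answers before any rpc_cmd runs. A simplified sketch of that launch-and-wait pattern; the polling loop below only approximates the harness's waitforlisten helper, it is not a copy of it:

    #!/usr/bin/env bash
    NS=nvmf_tgt_ns_spdk
    RPC_SOCK=/var/tmp/spdk.sock

    ip netns exec "$NS" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    for (( i = 0; i < 100; i++ )); do
        # rpc.py succeeds once the app is up and listening on the socket.
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC_SOCK" \
              rpc_get_methods &> /dev/null; then
            break
        fi
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.1
    done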
00:23:12.515 [2024-12-06 13:17:18.833120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.515 [2024-12-06 13:17:18.833228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.515 [2024-12-06 13:17:18.833332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:12.515 [2024-12-06 13:17:18.833339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.515 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.515 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:12.515 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:12.515 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.515 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.515 [2024-12-06 13:17:18.934842] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.515 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.515 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:12.515 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.515 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.515 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:12.515 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.515 13:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.515 Malloc0 00:23:12.515 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.515 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:12.515 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.515 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.515 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.515 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:12.515 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.515 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.515 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.515 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:12.515 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.515 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.515 [2024-12-06 13:17:19.021106] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:12.515 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.515 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:12.515 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.516 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.516 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.516 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:12.516 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.516 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.776 [ 00:23:12.776 { 00:23:12.776 "allow_any_host": true, 00:23:12.776 "hosts": [], 00:23:12.776 "listen_addresses": [ 00:23:12.776 { 00:23:12.776 "adrfam": "IPv4", 00:23:12.776 "traddr": "10.0.0.3", 00:23:12.776 "trsvcid": "4420", 00:23:12.776 "trtype": "TCP" 00:23:12.776 } 00:23:12.776 ], 00:23:12.776 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:12.776 "subtype": "Discovery" 00:23:12.776 }, 00:23:12.776 { 00:23:12.776 "allow_any_host": true, 00:23:12.776 "hosts": [], 00:23:12.776 "listen_addresses": [ 00:23:12.776 { 00:23:12.776 "adrfam": "IPv4", 00:23:12.776 "traddr": "10.0.0.3", 00:23:12.776 "trsvcid": "4420", 00:23:12.776 "trtype": "TCP" 00:23:12.776 } 00:23:12.776 ], 00:23:12.776 "max_cntlid": 65519, 00:23:12.776 "max_namespaces": 32, 00:23:12.776 "min_cntlid": 1, 00:23:12.776 "model_number": "SPDK bdev Controller", 00:23:12.776 "namespaces": [ 00:23:12.776 { 00:23:12.776 "bdev_name": "Malloc0", 00:23:12.776 "eui64": "ABCDEF0123456789", 00:23:12.776 "name": "Malloc0", 00:23:12.776 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:12.776 "nsid": 1, 00:23:12.776 "uuid": "5681ab2a-a853-412f-bd8f-22d3eed293ab" 00:23:12.776 } 00:23:12.776 ], 00:23:12.776 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.776 "serial_number": "SPDK00000000000001", 00:23:12.776 "subtype": "NVMe" 00:23:12.776 } 00:23:12.776 ] 00:23:12.776 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.776 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:12.776 [2024-12-06 13:17:19.067888] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
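The rpc_cmd calls above, together with the nvmf_get_subsystems JSON they produce, are the entire target configuration for this test: one TCP transport, one 64 MiB malloc bdev exported as namespace 1 of cnode1, and data plus discovery listeners on 10.0.0.3:4420. The same sequence expressed as direct rpc.py invocations, with arguments copied from the log (rpc_cmd is a thin wrapper around rpc.py):

    #!/usr/bin/env bash
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC nvmf_create_transport -t tcp -o -u 8192   # transport opts as used by the harness
    $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM disk, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001                  # allow any host, set serial number
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
         --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_get_subsystems                       # prints the JSON shown above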
00:23:12.776 [2024-12-06 13:17:19.068069] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87406 ] 00:23:12.776 [2024-12-06 13:17:19.236388] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:12.776 [2024-12-06 13:17:19.236474] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:12.776 [2024-12-06 13:17:19.236484] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:12.776 [2024-12-06 13:17:19.236502] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:12.776 [2024-12-06 13:17:19.236514] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:12.776 [2024-12-06 13:17:19.236989] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:12.776 [2024-12-06 13:17:19.237067] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2430d90 0 00:23:12.776 [2024-12-06 13:17:19.241977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:12.776 [2024-12-06 13:17:19.242010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:12.776 [2024-12-06 13:17:19.242018] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:12.776 [2024-12-06 13:17:19.242023] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:12.776 [2024-12-06 13:17:19.242064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.776 [2024-12-06 13:17:19.242074] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.776 [2024-12-06 13:17:19.242080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2430d90) 00:23:12.776 [2024-12-06 13:17:19.242100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:12.776 [2024-12-06 13:17:19.242142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471600, cid 0, qid 0 00:23:12.776 [2024-12-06 13:17:19.249973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.776 [2024-12-06 13:17:19.250004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.776 [2024-12-06 13:17:19.250011] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.776 [2024-12-06 13:17:19.250018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471600) on tqpair=0x2430d90 00:23:12.776 [2024-12-06 13:17:19.250031] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:12.776 [2024-12-06 13:17:19.250042] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:12.776 [2024-12-06 13:17:19.250051] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:12.776 [2024-12-06 13:17:19.250074] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.776 [2024-12-06 13:17:19.250081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:12.776 [2024-12-06 13:17:19.250087] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2430d90) 00:23:12.776 [2024-12-06 13:17:19.250100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.776 [2024-12-06 13:17:19.250139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471600, cid 0, qid 0 00:23:12.776 [2024-12-06 13:17:19.250218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.776 [2024-12-06 13:17:19.250229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.776 [2024-12-06 13:17:19.250234] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.776 [2024-12-06 13:17:19.250240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471600) on tqpair=0x2430d90 00:23:12.776 [2024-12-06 13:17:19.250247] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:12.776 [2024-12-06 13:17:19.250258] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:12.776 [2024-12-06 13:17:19.250269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.776 [2024-12-06 13:17:19.250275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.776 [2024-12-06 13:17:19.250280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2430d90) 00:23:12.776 [2024-12-06 13:17:19.250291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.776 [2024-12-06 13:17:19.250320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471600, cid 0, qid 0 00:23:12.776 [2024-12-06 13:17:19.250376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.776 [2024-12-06 13:17:19.250387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.776 [2024-12-06 13:17:19.250392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.776 [2024-12-06 13:17:19.250398] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471600) on tqpair=0x2430d90 00:23:12.776 [2024-12-06 13:17:19.250406] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:12.776 [2024-12-06 13:17:19.250417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:12.776 [2024-12-06 13:17:19.250428] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.776 [2024-12-06 13:17:19.250433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.776 [2024-12-06 13:17:19.250438] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2430d90) 00:23:12.776 [2024-12-06 13:17:19.250448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.776 [2024-12-06 13:17:19.250479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471600, cid 0, qid 0 00:23:12.776 [2024-12-06 13:17:19.250536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.776 [2024-12-06 13:17:19.250545] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.776 [2024-12-06 13:17:19.250550] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.776 [2024-12-06 13:17:19.250555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471600) on tqpair=0x2430d90 00:23:12.776 [2024-12-06 13:17:19.250563] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:12.776 [2024-12-06 13:17:19.250576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.776 [2024-12-06 13:17:19.250582] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.776 [2024-12-06 13:17:19.250588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2430d90) 00:23:12.777 [2024-12-06 13:17:19.250598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.777 [2024-12-06 13:17:19.250622] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471600, cid 0, qid 0 00:23:12.777 [2024-12-06 13:17:19.250675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.777 [2024-12-06 13:17:19.250684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.777 [2024-12-06 13:17:19.250689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.250695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471600) on tqpair=0x2430d90 00:23:12.777 [2024-12-06 13:17:19.250702] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:12.777 [2024-12-06 13:17:19.250709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:12.777 [2024-12-06 13:17:19.250719] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:12.777 [2024-12-06 13:17:19.250834] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:12.777 [2024-12-06 13:17:19.250843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:12.777 [2024-12-06 13:17:19.250856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.250862] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.250867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2430d90) 00:23:12.777 [2024-12-06 13:17:19.250877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.777 [2024-12-06 13:17:19.250905] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471600, cid 0, qid 0 00:23:12.777 [2024-12-06 13:17:19.250971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.777 [2024-12-06 13:17:19.250982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.777 [2024-12-06 13:17:19.250990] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:23:12.777 [2024-12-06 13:17:19.250999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471600) on tqpair=0x2430d90 00:23:12.777 [2024-12-06 13:17:19.251010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:12.777 [2024-12-06 13:17:19.251027] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2430d90) 00:23:12.777 [2024-12-06 13:17:19.251049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.777 [2024-12-06 13:17:19.251077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471600, cid 0, qid 0 00:23:12.777 [2024-12-06 13:17:19.251145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.777 [2024-12-06 13:17:19.251157] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.777 [2024-12-06 13:17:19.251163] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471600) on tqpair=0x2430d90 00:23:12.777 [2024-12-06 13:17:19.251175] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:12.777 [2024-12-06 13:17:19.251182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:12.777 [2024-12-06 13:17:19.251193] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:12.777 [2024-12-06 13:17:19.251207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:12.777 [2024-12-06 13:17:19.251222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2430d90) 00:23:12.777 [2024-12-06 13:17:19.251238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.777 [2024-12-06 13:17:19.251266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471600, cid 0, qid 0 00:23:12.777 [2024-12-06 13:17:19.251362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:12.777 [2024-12-06 13:17:19.251384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:12.777 [2024-12-06 13:17:19.251391] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251396] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2430d90): datao=0, datal=4096, cccid=0 00:23:12.777 [2024-12-06 13:17:19.251403] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2471600) on tqpair(0x2430d90): expected_datao=0, payload_size=4096 00:23:12.777 [2024-12-06 13:17:19.251410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251420] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251427] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251439] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.777 [2024-12-06 13:17:19.251447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.777 [2024-12-06 13:17:19.251452] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471600) on tqpair=0x2430d90 00:23:12.777 [2024-12-06 13:17:19.251469] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:12.777 [2024-12-06 13:17:19.251476] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:12.777 [2024-12-06 13:17:19.251483] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:12.777 [2024-12-06 13:17:19.251490] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:12.777 [2024-12-06 13:17:19.251496] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:12.777 [2024-12-06 13:17:19.251503] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:12.777 [2024-12-06 13:17:19.251515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:12.777 [2024-12-06 13:17:19.251526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251532] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251537] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2430d90) 00:23:12.777 [2024-12-06 13:17:19.251548] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:12.777 [2024-12-06 13:17:19.251577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471600, cid 0, qid 0 00:23:12.777 [2024-12-06 13:17:19.251642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.777 [2024-12-06 13:17:19.251654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.777 [2024-12-06 13:17:19.251660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471600) on tqpair=0x2430d90 00:23:12.777 [2024-12-06 13:17:19.251681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251693] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2430d90) 00:23:12.777 [2024-12-06 13:17:19.251702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.777 
[2024-12-06 13:17:19.251711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2430d90) 00:23:12.777 [2024-12-06 13:17:19.251729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.777 [2024-12-06 13:17:19.251737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251743] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2430d90) 00:23:12.777 [2024-12-06 13:17:19.251756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.777 [2024-12-06 13:17:19.251764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2430d90) 00:23:12.777 [2024-12-06 13:17:19.251782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.777 [2024-12-06 13:17:19.251789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:12.777 [2024-12-06 13:17:19.251801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:12.777 [2024-12-06 13:17:19.251810] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.777 [2024-12-06 13:17:19.251815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2430d90) 00:23:12.777 [2024-12-06 13:17:19.251825] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.777 [2024-12-06 13:17:19.251852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471600, cid 0, qid 0 00:23:12.777 [2024-12-06 13:17:19.251862] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471780, cid 1, qid 0 00:23:12.777 [2024-12-06 13:17:19.251869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471900, cid 2, qid 0 00:23:12.777 [2024-12-06 13:17:19.251875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471a80, cid 3, qid 0 00:23:12.777 [2024-12-06 13:17:19.251882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471c00, cid 4, qid 0 00:23:12.777 [2024-12-06 13:17:19.251985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.777 [2024-12-06 13:17:19.251996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.777 [2024-12-06 13:17:19.252001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.252007] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471c00) on tqpair=0x2430d90 00:23:12.778 [2024-12-06 
13:17:19.252014] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:12.778 [2024-12-06 13:17:19.252030] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:23:12.778 [2024-12-06 13:17:19.252049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.252061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2430d90) 00:23:12.778 [2024-12-06 13:17:19.252076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.778 [2024-12-06 13:17:19.252119] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471c00, cid 4, qid 0 00:23:12.778 [2024-12-06 13:17:19.252194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:12.778 [2024-12-06 13:17:19.252217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:12.778 [2024-12-06 13:17:19.252223] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.252229] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2430d90): datao=0, datal=4096, cccid=4 00:23:12.778 [2024-12-06 13:17:19.252235] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2471c00) on tqpair(0x2430d90): expected_datao=0, payload_size=4096 00:23:12.778 [2024-12-06 13:17:19.252241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.252251] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.252263] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.252274] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.778 [2024-12-06 13:17:19.252283] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.778 [2024-12-06 13:17:19.252287] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.252293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471c00) on tqpair=0x2430d90 00:23:12.778 [2024-12-06 13:17:19.252311] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:12.778 [2024-12-06 13:17:19.252350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.252362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2430d90) 00:23:12.778 [2024-12-06 13:17:19.252376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.778 [2024-12-06 13:17:19.252387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.252393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.252398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2430d90) 00:23:12.778 [2024-12-06 13:17:19.252407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.778 [2024-12-06 13:17:19.252443] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471c00, cid 4, qid 0 00:23:12.778 [2024-12-06 13:17:19.252460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471d80, cid 5, qid 0 00:23:12.778 [2024-12-06 13:17:19.252569] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:12.778 [2024-12-06 13:17:19.252588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:12.778 [2024-12-06 13:17:19.252595] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.252600] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2430d90): datao=0, datal=1024, cccid=4 00:23:12.778 [2024-12-06 13:17:19.252606] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2471c00) on tqpair(0x2430d90): expected_datao=0, payload_size=1024 00:23:12.778 [2024-12-06 13:17:19.252612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.252622] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.252627] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.252635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.778 [2024-12-06 13:17:19.252643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.778 [2024-12-06 13:17:19.252648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.252653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471d80) on tqpair=0x2430d90 00:23:12.778 [2024-12-06 13:17:19.297974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.778 [2024-12-06 13:17:19.298014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.778 [2024-12-06 13:17:19.298022] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.298028] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471c00) on tqpair=0x2430d90 00:23:12.778 [2024-12-06 13:17:19.298057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.298069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2430d90) 00:23:12.778 [2024-12-06 13:17:19.298089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.778 [2024-12-06 13:17:19.298137] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471c00, cid 4, qid 0 00:23:12.778 [2024-12-06 13:17:19.298254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:12.778 [2024-12-06 13:17:19.298263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:12.778 [2024-12-06 13:17:19.298267] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.298271] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2430d90): datao=0, datal=3072, cccid=4 00:23:12.778 [2024-12-06 13:17:19.298276] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2471c00) on tqpair(0x2430d90): expected_datao=0, payload_size=3072 00:23:12.778 [2024-12-06 13:17:19.298282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.778 [2024-12-06 13:17:19.298292] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
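The DEBUG stream above is spdk_nvme_identify bringing up the discovery controller by the book: FABRIC CONNECT on the admin queue, PROPERTY GET of VS and CAP, CC.EN=0 until CSTS.RDY=0, then CC.EN=1 until CSTS.RDY=1, IDENTIFY, keep-alive configuration, and finally three GET LOG PAGE (02) reads of the Discovery Log Page (LID 70h, the 0x70 low byte in cdw10): 1024 bytes to learn the record count, 3072 bytes for the full page, and an 8-byte re-read of genctr to confirm the page did not change underneath the reader, matching the datal values in the c2h traces. Outside the harness, the same discovery query can be made with nvme-cli, assuming the nvme-tcp module loaded earlier:

    # Standalone equivalent of the discovery-log reads above (nvme-cli):
    nvme discover -t tcp -a 10.0.0.3 -s 4420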
00:23:12.778 [2024-12-06 13:17:19.298297] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:12.778 [2024-12-06 13:17:19.298307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:12.778 [2024-12-06 13:17:19.298314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:12.778 [2024-12-06 13:17:19.298318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:12.778 [2024-12-06 13:17:19.298322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471c00) on tqpair=0x2430d90
00:23:12.778 [2024-12-06 13:17:19.298336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:12.778 [2024-12-06 13:17:19.298342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2430d90)
00:23:12.778 [2024-12-06 13:17:19.298351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.778 [2024-12-06 13:17:19.298390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471c00, cid 4, qid 0
00:23:12.778 [2024-12-06 13:17:19.298466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:12.778 [2024-12-06 13:17:19.298474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:12.778 [2024-12-06 13:17:19.298478] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:12.778 [2024-12-06 13:17:19.298483] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2430d90): datao=0, datal=8, cccid=4
00:23:12.778 [2024-12-06 13:17:19.298488] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2471c00) on tqpair(0x2430d90): expected_datao=0, payload_size=8
00:23:12.778 [2024-12-06 13:17:19.298493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:12.778 [2024-12-06 13:17:19.298501] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:12.778 [2024-12-06 13:17:19.298505] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:13.045 [2024-12-06 13:17:19.340030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:13.045 [2024-12-06 13:17:19.340067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:13.045 [2024-12-06 13:17:19.340074] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:13.045 [2024-12-06 13:17:19.340080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471c00) on tqpair=0x2430d90
00:23:13.045 =====================================================
00:23:13.045 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:13.045 =====================================================
00:23:13.045 Controller Capabilities/Features
00:23:13.045 ================================
00:23:13.045 Vendor ID: 0000
00:23:13.045 Subsystem Vendor ID: 0000
00:23:13.045 Serial Number: ....................
00:23:13.045 Model Number: ........................................
00:23:13.045 Firmware Version: 25.01
00:23:13.045 Recommended Arb Burst: 0
00:23:13.045 IEEE OUI Identifier: 00 00 00
00:23:13.045 Multi-path I/O
00:23:13.045 May have multiple subsystem ports: No
00:23:13.045 May have multiple controllers: No
00:23:13.045 Associated with SR-IOV VF: No
00:23:13.045 Max Data Transfer Size: 131072
00:23:13.045 Max Number of Namespaces: 0
00:23:13.045 Max Number of I/O Queues: 1024
00:23:13.045 NVMe Specification Version (VS): 1.3
00:23:13.045 NVMe Specification Version (Identify): 1.3
00:23:13.045 Maximum Queue Entries: 128
00:23:13.045 Contiguous Queues Required: Yes
00:23:13.045 Arbitration Mechanisms Supported
00:23:13.045 Weighted Round Robin: Not Supported
00:23:13.045 Vendor Specific: Not Supported
00:23:13.045 Reset Timeout: 15000 ms
00:23:13.045 Doorbell Stride: 4 bytes
00:23:13.045 NVM Subsystem Reset: Not Supported
00:23:13.045 Command Sets Supported
00:23:13.045 NVM Command Set: Supported
00:23:13.045 Boot Partition: Not Supported
00:23:13.045 Memory Page Size Minimum: 4096 bytes
00:23:13.045 Memory Page Size Maximum: 4096 bytes
00:23:13.045 Persistent Memory Region: Not Supported
00:23:13.045 Optional Asynchronous Events Supported
00:23:13.045 Namespace Attribute Notices: Not Supported
00:23:13.045 Firmware Activation Notices: Not Supported
00:23:13.045 ANA Change Notices: Not Supported
00:23:13.045 PLE Aggregate Log Change Notices: Not Supported
00:23:13.045 LBA Status Info Alert Notices: Not Supported
00:23:13.045 EGE Aggregate Log Change Notices: Not Supported
00:23:13.045 Normal NVM Subsystem Shutdown event: Not Supported
00:23:13.045 Zone Descriptor Change Notices: Not Supported
00:23:13.045 Discovery Log Change Notices: Supported
00:23:13.045 Controller Attributes
00:23:13.045 128-bit Host Identifier: Not Supported
00:23:13.045 Non-Operational Permissive Mode: Not Supported
00:23:13.045 NVM Sets: Not Supported
00:23:13.045 Read Recovery Levels: Not Supported
00:23:13.045 Endurance Groups: Not Supported
00:23:13.045 Predictable Latency Mode: Not Supported
00:23:13.045 Traffic Based Keep ALive: Not Supported
00:23:13.045 Namespace Granularity: Not Supported
00:23:13.045 SQ Associations: Not Supported
00:23:13.045 UUID List: Not Supported
00:23:13.045 Multi-Domain Subsystem: Not Supported
00:23:13.045 Fixed Capacity Management: Not Supported
00:23:13.045 Variable Capacity Management: Not Supported
00:23:13.045 Delete Endurance Group: Not Supported
00:23:13.045 Delete NVM Set: Not Supported
00:23:13.045 Extended LBA Formats Supported: Not Supported
00:23:13.045 Flexible Data Placement Supported: Not Supported
00:23:13.045
00:23:13.045 Controller Memory Buffer Support
00:23:13.046 ================================
00:23:13.046 Supported: No
00:23:13.046
00:23:13.046 Persistent Memory Region Support
00:23:13.046 ================================
00:23:13.046 Supported: No
00:23:13.046
00:23:13.046 Admin Command Set Attributes
00:23:13.046 ============================
00:23:13.046 Security Send/Receive: Not Supported
00:23:13.046 Format NVM: Not Supported
00:23:13.046 Firmware Activate/Download: Not Supported
00:23:13.046 Namespace Management: Not Supported
00:23:13.046 Device Self-Test: Not Supported
00:23:13.046 Directives: Not Supported
00:23:13.046 NVMe-MI: Not Supported
00:23:13.046 Virtualization Management: Not Supported
00:23:13.046 Doorbell Buffer Config: Not Supported
00:23:13.046 Get LBA Status Capability: Not Supported
00:23:13.046 Command & Feature Lockdown Capability: Not Supported
00:23:13.046 Abort Command Limit: 1
00:23:13.046 Async Event Request Limit: 4
00:23:13.046 Number of Firmware Slots: N/A
00:23:13.046 Firmware Slot 1 Read-Only: N/A
00:23:13.046 Firmware Activation Without Reset: N/A
00:23:13.046 Multiple Update Detection Support: N/A
00:23:13.046 Firmware Update Granularity: No Information Provided
00:23:13.046 Per-Namespace SMART Log: No
00:23:13.046 Asymmetric Namespace Access Log Page: Not Supported
00:23:13.046 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:13.046 Command Effects Log Page: Not Supported
00:23:13.046 Get Log Page Extended Data: Supported
00:23:13.046 Telemetry Log Pages: Not Supported
00:23:13.046 Persistent Event Log Pages: Not Supported
00:23:13.046 Supported Log Pages Log Page: May Support
00:23:13.046 Commands Supported & Effects Log Page: Not Supported
00:23:13.046 Feature Identifiers & Effects Log Page:May Support
00:23:13.046 NVMe-MI Commands & Effects Log Page: May Support
00:23:13.046 Data Area 4 for Telemetry Log: Not Supported
00:23:13.046 Error Log Page Entries Supported: 128
00:23:13.046 Keep Alive: Not Supported
00:23:13.046
00:23:13.046 NVM Command Set Attributes
00:23:13.046 ==========================
00:23:13.046 Submission Queue Entry Size
00:23:13.046 Max: 1
00:23:13.046 Min: 1
00:23:13.046 Completion Queue Entry Size
00:23:13.046 Max: 1
00:23:13.046 Min: 1
00:23:13.046 Number of Namespaces: 0
00:23:13.046 Compare Command: Not Supported
00:23:13.046 Write Uncorrectable Command: Not Supported
00:23:13.046 Dataset Management Command: Not Supported
00:23:13.046 Write Zeroes Command: Not Supported
00:23:13.046 Set Features Save Field: Not Supported
00:23:13.046 Reservations: Not Supported
00:23:13.046 Timestamp: Not Supported
00:23:13.046 Copy: Not Supported
00:23:13.046 Volatile Write Cache: Not Present
00:23:13.046 Atomic Write Unit (Normal): 1
00:23:13.046 Atomic Write Unit (PFail): 1
00:23:13.046 Atomic Compare & Write Unit: 1
00:23:13.046 Fused Compare & Write: Supported
00:23:13.046 Scatter-Gather List
00:23:13.046 SGL Command Set: Supported
00:23:13.046 SGL Keyed: Supported
00:23:13.046 SGL Bit Bucket Descriptor: Not Supported
00:23:13.046 SGL Metadata Pointer: Not Supported
00:23:13.046 Oversized SGL: Not Supported
00:23:13.046 SGL Metadata Address: Not Supported
00:23:13.046 SGL Offset: Supported
00:23:13.046 Transport SGL Data Block: Not Supported
00:23:13.046 Replay Protected Memory Block: Not Supported
00:23:13.046
00:23:13.046 Firmware Slot Information
00:23:13.046 =========================
00:23:13.046 Active slot: 0
00:23:13.046
00:23:13.046
00:23:13.046 Error Log
00:23:13.046 =========
00:23:13.046
00:23:13.046 Active Namespaces
00:23:13.046 =================
00:23:13.046 Discovery Log Page
00:23:13.046 ==================
00:23:13.046 Generation Counter: 2
00:23:13.046 Number of Records: 2
00:23:13.046 Record Format: 0
00:23:13.046
00:23:13.046 Discovery Log Entry 0
00:23:13.046 ----------------------
00:23:13.046 Transport Type: 3 (TCP)
00:23:13.046 Address Family: 1 (IPv4)
00:23:13.046 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:13.046 Entry Flags:
00:23:13.046 Duplicate Returned Information: 1
00:23:13.046 Explicit Persistent Connection Support for Discovery: 1
00:23:13.046 Transport Requirements:
00:23:13.046 Secure Channel: Not Required
00:23:13.046 Port ID: 0 (0x0000)
00:23:13.046 Controller ID: 65535 (0xffff)
00:23:13.046 Admin Max SQ Size: 128
00:23:13.046 Transport Service Identifier: 4420
00:23:13.046 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:13.046 Transport Address: 10.0.0.3
00:23:13.046 Discovery Log Entry 1
00:23:13.046 ----------------------
00:23:13.046 Transport Type: 3 (TCP)
00:23:13.046 Address Family: 1 (IPv4)
00:23:13.046 Subsystem Type: 2 (NVM Subsystem)
00:23:13.046 Entry Flags:
00:23:13.046 Duplicate Returned Information: 0
00:23:13.046 Explicit Persistent Connection Support for Discovery: 0
00:23:13.046 Transport Requirements:
00:23:13.046 Secure Channel: Not Required
00:23:13.046 Port ID: 0 (0x0000)
00:23:13.046 Controller ID: 65535 (0xffff)
00:23:13.046 Admin Max SQ Size: 128
00:23:13.046 Transport Service Identifier: 4420
00:23:13.046 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:13.046 Transport Address: 10.0.0.3 [2024-12-06 13:17:19.340239] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:23:13.046 [2024-12-06 13:17:19.340261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471600) on tqpair=0x2430d90
00:23:13.046 [2024-12-06 13:17:19.340270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:13.046 [2024-12-06 13:17:19.340277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471780) on tqpair=0x2430d90
00:23:13.046 [2024-12-06 13:17:19.340282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:13.046 [2024-12-06 13:17:19.340288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471900) on tqpair=0x2430d90
00:23:13.046 [2024-12-06 13:17:19.340293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:13.046 [2024-12-06 13:17:19.340299] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471a80) on tqpair=0x2430d90
00:23:13.046 [2024-12-06 13:17:19.340304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:13.046 [2024-12-06 13:17:19.340318] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:13.046 [2024-12-06 13:17:19.340324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:13.046 [2024-12-06 13:17:19.340328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2430d90)
00:23:13.046 [2024-12-06 13:17:19.340339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.046 [2024-12-06 13:17:19.340372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471a80, cid 3, qid 0
00:23:13.046 [2024-12-06 13:17:19.340461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:13.046 [2024-12-06 13:17:19.340473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:13.046 [2024-12-06 13:17:19.340481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:13.046 [2024-12-06 13:17:19.340486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471a80) on tqpair=0x2430d90
00:23:13.046 [2024-12-06 13:17:19.340496] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:13.046 [2024-12-06 13:17:19.340501] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:13.046 [2024-12-06 13:17:19.340505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2430d90)
00:23:13.046 [2024-12-06
13:17:19.340514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.046 [2024-12-06 13:17:19.340544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471a80, cid 3, qid 0 00:23:13.046 [2024-12-06 13:17:19.340623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.046 [2024-12-06 13:17:19.340631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.046 [2024-12-06 13:17:19.340635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.046 [2024-12-06 13:17:19.340640] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471a80) on tqpair=0x2430d90 00:23:13.046 [2024-12-06 13:17:19.340651] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:13.046 [2024-12-06 13:17:19.340657] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:13.046 [2024-12-06 13:17:19.340669] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.046 [2024-12-06 13:17:19.340674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.046 [2024-12-06 13:17:19.340679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2430d90) 00:23:13.046 [2024-12-06 13:17:19.340687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.046 [2024-12-06 13:17:19.340709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471a80, cid 3, qid 0 00:23:13.046 [2024-12-06 13:17:19.340764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.046 [2024-12-06 13:17:19.340772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.046 [2024-12-06 13:17:19.340776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.046 [2024-12-06 13:17:19.340780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471a80) on tqpair=0x2430d90 00:23:13.047 [2024-12-06 13:17:19.340793] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.340798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.340802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2430d90) 00:23:13.047 [2024-12-06 13:17:19.340810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.047 [2024-12-06 13:17:19.340830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471a80, cid 3, qid 0 00:23:13.047 [2024-12-06 13:17:19.340882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.047 [2024-12-06 13:17:19.340889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.047 [2024-12-06 13:17:19.340893] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.340898] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471a80) on tqpair=0x2430d90 00:23:13.047 [2024-12-06 13:17:19.340909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.340914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.340918] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2430d90) 00:23:13.047 [2024-12-06 13:17:19.340926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.047 [2024-12-06 13:17:19.340961] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471a80, cid 3, qid 0 00:23:13.047 [2024-12-06 13:17:19.341020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.047 [2024-12-06 13:17:19.341028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.047 [2024-12-06 13:17:19.341032] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341037] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471a80) on tqpair=0x2430d90 00:23:13.047 [2024-12-06 13:17:19.341048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341058] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2430d90) 00:23:13.047 [2024-12-06 13:17:19.341066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.047 [2024-12-06 13:17:19.341088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471a80, cid 3, qid 0 00:23:13.047 [2024-12-06 13:17:19.341143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.047 [2024-12-06 13:17:19.341151] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.047 [2024-12-06 13:17:19.341155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471a80) on tqpair=0x2430d90 00:23:13.047 [2024-12-06 13:17:19.341171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2430d90) 00:23:13.047 [2024-12-06 13:17:19.341188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.047 [2024-12-06 13:17:19.341208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471a80, cid 3, qid 0 00:23:13.047 [2024-12-06 13:17:19.341259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.047 [2024-12-06 13:17:19.341266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.047 [2024-12-06 13:17:19.341270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471a80) on tqpair=0x2430d90 00:23:13.047 [2024-12-06 13:17:19.341286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341291] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2430d90) 00:23:13.047 [2024-12-06 13:17:19.341303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.047 [2024-12-06 13:17:19.341323] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471a80, cid 3, qid 0 00:23:13.047 [2024-12-06 13:17:19.341374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.047 [2024-12-06 13:17:19.341381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.047 [2024-12-06 13:17:19.341385] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471a80) on tqpair=0x2430d90 00:23:13.047 [2024-12-06 13:17:19.341401] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2430d90) 00:23:13.047 [2024-12-06 13:17:19.341418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.047 [2024-12-06 13:17:19.341438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471a80, cid 3, qid 0 00:23:13.047 [2024-12-06 13:17:19.341492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.047 [2024-12-06 13:17:19.341499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.047 [2024-12-06 13:17:19.341503] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471a80) on tqpair=0x2430d90 00:23:13.047 [2024-12-06 13:17:19.341519] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341528] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2430d90) 00:23:13.047 [2024-12-06 13:17:19.341537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.047 [2024-12-06 13:17:19.341556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471a80, cid 3, qid 0 00:23:13.047 [2024-12-06 13:17:19.341616] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.047 [2024-12-06 13:17:19.341627] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.047 [2024-12-06 13:17:19.341634] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471a80) on tqpair=0x2430d90 00:23:13.047 [2024-12-06 13:17:19.341653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2430d90) 00:23:13.047 [2024-12-06 13:17:19.341671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.047 [2024-12-06 13:17:19.341693] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471a80, cid 3, qid 0 00:23:13.047 
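
Each FABRIC PROPERTY GET round trip in this stretch is the TCP transport's stand-in for what a PCIe NVMe driver would do with an MMIO register read. Applications never issue these directly; the resulting register values are exposed through accessors on the controller handle. A short sketch, assuming a connected ctrlr:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void
dump_ctrlr_regs(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Each accessor corresponds to a property/register read like those traced here. */
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	printf("CAP.MQES=%u VS=%u.%u CSTS.RDY=%u CSTS.SHST=%u\n",
	       cap.bits.mqes, vs.bits.mjr, vs.bits.mnr,
	       csts.bits.rdy, csts.bits.shst);
}
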
[2024-12-06 13:17:19.341747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.047 [2024-12-06 13:17:19.341761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.047 [2024-12-06 13:17:19.341766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471a80) on tqpair=0x2430d90 00:23:13.047 [2024-12-06 13:17:19.341783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2430d90) 00:23:13.047 [2024-12-06 13:17:19.341800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.047 [2024-12-06 13:17:19.341821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471a80, cid 3, qid 0 00:23:13.047 [2024-12-06 13:17:19.341870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.047 [2024-12-06 13:17:19.341878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.047 [2024-12-06 13:17:19.341882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471a80) on tqpair=0x2430d90 00:23:13.047 [2024-12-06 13:17:19.341897] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341902] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.341906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2430d90) 00:23:13.047 [2024-12-06 13:17:19.341914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.047 [2024-12-06 13:17:19.341934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471a80, cid 3, qid 0 00:23:13.047 [2024-12-06 13:17:19.345962] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.047 [2024-12-06 13:17:19.345988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.047 [2024-12-06 13:17:19.345994] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.345999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471a80) on tqpair=0x2430d90 00:23:13.047 [2024-12-06 13:17:19.346015] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.346021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.346025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2430d90) 00:23:13.047 [2024-12-06 13:17:19.346035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.047 [2024-12-06 13:17:19.346074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2471a80, cid 3, qid 0 00:23:13.047 [2024-12-06 13:17:19.346136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.047 [2024-12-06 13:17:19.346146] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
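
The repeated FABRIC PROPERTY GET entries surrounding this point are nvme_ctrlr.c polling CSTS.SHST after writing CC.SHN, which is why the trace just below can report "shutdown complete in 5 milliseconds" against the 10000 ms budget. A self-contained sketch of that handshake using the spec-level register types; write_cc()/read_csts() are hypothetical stand-ins for the transport's property calls, and the simulated controller "shuts down" instantly:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include "spdk/nvme_spec.h"

/* Hypothetical register file standing in for property get/set over the fabric. */
static union spdk_nvme_cc_register g_cc;
static union spdk_nvme_csts_register g_csts;

static void
write_cc(union spdk_nvme_cc_register cc)
{
	g_cc = cc;
	if (cc.bits.shn == SPDK_NVME_SHN_NORMAL) {
		g_csts.bits.shst = SPDK_NVME_SHST_COMPLETE; /* model an instant shutdown */
	}
}

static union spdk_nvme_csts_register
read_csts(void)
{
	return g_csts;
}

static bool
shutdown_poll(uint32_t timeout_ms)
{
	union spdk_nvme_cc_register cc = g_cc;

	cc.bits.shn = SPDK_NVME_SHN_NORMAL;	/* request an orderly shutdown */
	write_cc(cc);

	for (uint32_t waited_ms = 0; waited_ms < timeout_ms; waited_ms++) {
		if (read_csts().bits.shst == SPDK_NVME_SHST_COMPLETE) {
			return true;		/* -> "shutdown complete in N milliseconds" */
		}
		/* the real poller sleeps roughly 1 ms between CSTS reads */
	}
	return false;
}

int
main(void)
{
	printf("shutdown %s\n", shutdown_poll(10000) ? "complete" : "timed out");
	return 0;
}
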
00:23:13.047 [2024-12-06 13:17:19.346150] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.047 [2024-12-06 13:17:19.346154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2471a80) on tqpair=0x2430d90 00:23:13.047 [2024-12-06 13:17:19.346164] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:23:13.047 00:23:13.047 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:13.047 [2024-12-06 13:17:19.383284] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:23:13.047 [2024-12-06 13:17:19.383331] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87408 ] 00:23:13.048 [2024-12-06 13:17:19.543332] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:13.048 [2024-12-06 13:17:19.543407] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:13.048 [2024-12-06 13:17:19.543414] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:13.048 [2024-12-06 13:17:19.543429] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:13.048 [2024-12-06 13:17:19.543440] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:13.048 [2024-12-06 13:17:19.543811] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:13.048 [2024-12-06 13:17:19.543879] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x4ccd90 0 00:23:13.048 [2024-12-06 13:17:19.549969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:13.048 [2024-12-06 13:17:19.549998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:13.048 [2024-12-06 13:17:19.550005] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:13.048 [2024-12-06 13:17:19.550009] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:13.048 [2024-12-06 13:17:19.550045] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.550057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.550065] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ccd90) 00:23:13.048 [2024-12-06 13:17:19.550085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:13.048 [2024-12-06 13:17:19.550126] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50d600, cid 0, qid 0 00:23:13.048 [2024-12-06 13:17:19.557966] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.048 [2024-12-06 13:17:19.557991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.048 [2024-12-06 13:17:19.557996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.558002] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50d600) on tqpair=0x4ccd90 00:23:13.048 [2024-12-06 13:17:19.558017] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:13.048 [2024-12-06 13:17:19.558027] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:13.048 [2024-12-06 13:17:19.558034] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:13.048 [2024-12-06 13:17:19.558059] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.558070] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.558077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ccd90) 00:23:13.048 [2024-12-06 13:17:19.558090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.048 [2024-12-06 13:17:19.558125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50d600, cid 0, qid 0 00:23:13.048 [2024-12-06 13:17:19.558206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.048 [2024-12-06 13:17:19.558215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.048 [2024-12-06 13:17:19.558220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.558225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50d600) on tqpair=0x4ccd90 00:23:13.048 [2024-12-06 13:17:19.558232] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:13.048 [2024-12-06 13:17:19.558241] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:13.048 [2024-12-06 13:17:19.558250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.558255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.558259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ccd90) 00:23:13.048 [2024-12-06 13:17:19.558268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.048 [2024-12-06 13:17:19.558291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50d600, cid 0, qid 0 00:23:13.048 [2024-12-06 13:17:19.558353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.048 [2024-12-06 13:17:19.558360] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.048 [2024-12-06 13:17:19.558365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.558370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50d600) on tqpair=0x4ccd90 00:23:13.048 [2024-12-06 13:17:19.558377] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:23:13.048 [2024-12-06 13:17:19.558387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:13.048 [2024-12-06 13:17:19.558395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
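
Everything from "setting state to connect adminq" onward is the controller initialization state machine that spdk_nvme_connect() drives to completion: ICREQ/ICRESP, FABRIC CONNECT, then the VS and CAP reads before the controller is enabled. A minimal host program reproducing the spdk_nvme_identify run above through the public API; the program name and env setup are illustrative, and errors are mostly unchecked:

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	env_opts.opts_size = sizeof(env_opts);
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same connect string the test passes to spdk_nvme_identify above. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Runs the whole init sequence traced here: connect adminq, read vs/cap,
	 * enable, identify, configure AER, set keep alive, ready. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("cntlid=0x%04x mdts=%u nn=%u\n", cdata->cntlid, cdata->mdts, cdata->nn);

	spdk_nvme_detach(ctrlr);
	return 0;
}
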
00:23:13.048 [2024-12-06 13:17:19.558400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.558404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ccd90) 00:23:13.048 [2024-12-06 13:17:19.558413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.048 [2024-12-06 13:17:19.558434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50d600, cid 0, qid 0 00:23:13.048 [2024-12-06 13:17:19.558488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.048 [2024-12-06 13:17:19.558495] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.048 [2024-12-06 13:17:19.558499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.558504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50d600) on tqpair=0x4ccd90 00:23:13.048 [2024-12-06 13:17:19.558512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:13.048 [2024-12-06 13:17:19.558523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.558528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.558532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ccd90) 00:23:13.048 [2024-12-06 13:17:19.558540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.048 [2024-12-06 13:17:19.558560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50d600, cid 0, qid 0 00:23:13.048 [2024-12-06 13:17:19.558614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.048 [2024-12-06 13:17:19.558621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.048 [2024-12-06 13:17:19.558625] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.558630] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50d600) on tqpair=0x4ccd90 00:23:13.048 [2024-12-06 13:17:19.558636] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:13.048 [2024-12-06 13:17:19.558642] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:13.048 [2024-12-06 13:17:19.558651] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:13.048 [2024-12-06 13:17:19.558764] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:13.048 [2024-12-06 13:17:19.558771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:13.048 [2024-12-06 13:17:19.558781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.558785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.558789] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x4ccd90) 00:23:13.048 [2024-12-06 13:17:19.558797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.048 [2024-12-06 13:17:19.558819] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50d600, cid 0, qid 0 00:23:13.048 [2024-12-06 13:17:19.558876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.048 [2024-12-06 13:17:19.558884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.048 [2024-12-06 13:17:19.558888] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.558892] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50d600) on tqpair=0x4ccd90 00:23:13.048 [2024-12-06 13:17:19.558898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:13.048 [2024-12-06 13:17:19.558909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.558914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.558918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ccd90) 00:23:13.048 [2024-12-06 13:17:19.558926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.048 [2024-12-06 13:17:19.558958] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50d600, cid 0, qid 0 00:23:13.048 [2024-12-06 13:17:19.559020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.048 [2024-12-06 13:17:19.559027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.048 [2024-12-06 13:17:19.559031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.048 [2024-12-06 13:17:19.559036] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50d600) on tqpair=0x4ccd90 00:23:13.048 [2024-12-06 13:17:19.559041] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:13.049 [2024-12-06 13:17:19.559047] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:13.049 [2024-12-06 13:17:19.559056] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:13.049 [2024-12-06 13:17:19.559067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:13.049 [2024-12-06 13:17:19.559078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ccd90) 00:23:13.049 [2024-12-06 13:17:19.559091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.049 [2024-12-06 13:17:19.559113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50d600, cid 0, qid 0 00:23:13.049 [2024-12-06 13:17:19.559246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.049 [2024-12-06 13:17:19.559255] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.049 [2024-12-06 13:17:19.559260] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559264] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4ccd90): datao=0, datal=4096, cccid=0 00:23:13.049 [2024-12-06 13:17:19.559269] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x50d600) on tqpair(0x4ccd90): expected_datao=0, payload_size=4096 00:23:13.049 [2024-12-06 13:17:19.559274] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559283] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559288] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559297] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.049 [2024-12-06 13:17:19.559304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.049 [2024-12-06 13:17:19.559307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50d600) on tqpair=0x4ccd90 00:23:13.049 [2024-12-06 13:17:19.559322] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:13.049 [2024-12-06 13:17:19.559327] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:13.049 [2024-12-06 13:17:19.559332] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:13.049 [2024-12-06 13:17:19.559338] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:13.049 [2024-12-06 13:17:19.559343] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:13.049 [2024-12-06 13:17:19.559348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:13.049 [2024-12-06 13:17:19.559358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:13.049 [2024-12-06 13:17:19.559366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ccd90) 00:23:13.049 [2024-12-06 13:17:19.559384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:13.049 [2024-12-06 13:17:19.559407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50d600, cid 0, qid 0 00:23:13.049 [2024-12-06 13:17:19.559477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.049 [2024-12-06 13:17:19.559485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.049 [2024-12-06 13:17:19.559489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50d600) on tqpair=0x4ccd90 00:23:13.049 [2024-12-06 
13:17:19.559507] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ccd90) 00:23:13.049 [2024-12-06 13:17:19.559523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.049 [2024-12-06 13:17:19.559531] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x4ccd90) 00:23:13.049 [2024-12-06 13:17:19.559545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.049 [2024-12-06 13:17:19.559552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559560] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x4ccd90) 00:23:13.049 [2024-12-06 13:17:19.559566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.049 [2024-12-06 13:17:19.559573] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559577] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559581] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.049 [2024-12-06 13:17:19.559587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.049 [2024-12-06 13:17:19.559593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:13.049 [2024-12-06 13:17:19.559602] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:13.049 [2024-12-06 13:17:19.559610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559614] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4ccd90) 00:23:13.049 [2024-12-06 13:17:19.559622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.049 [2024-12-06 13:17:19.559643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50d600, cid 0, qid 0 00:23:13.049 [2024-12-06 13:17:19.559651] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50d780, cid 1, qid 0 00:23:13.049 [2024-12-06 13:17:19.559656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50d900, cid 2, qid 0 00:23:13.049 [2024-12-06 13:17:19.559661] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.049 [2024-12-06 13:17:19.559666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50dc00, cid 4, qid 0 00:23:13.049 
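
The SET FEATURES ASYNC EVENT CONFIGURATION, the four queued ASYNC EVENT REQUESTs (cid 0 through 3), and the keep-alive negotiation here are all driver-internal defaults, but both mechanisms have public hooks: the keep-alive budget comes from the controller options (the "Sending keep alive every 5000000 us" reported just below is consistent with sending at half of the default 10 s timeout), and AER completions can be routed to an application callback. A hedged sketch; the function names are illustrative:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void
aer_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	/* cdw0 carries the async event type/info per the NVMe spec */
	printf("AER completion, cdw0=0x%08x\n", cpl->cdw0);
}

static struct spdk_nvme_ctrlr *
connect_with_hooks(const struct spdk_nvme_transport_id *trid)
{
	struct spdk_nvme_ctrlr_opts opts;
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	opts.keep_alive_timeout_ms = 10000;	/* the driver keep-alives well inside this */

	ctrlr = spdk_nvme_connect(trid, &opts, sizeof(opts));
	if (ctrlr != NULL) {
		spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
	}
	return ctrlr;
}
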
[2024-12-06 13:17:19.559761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.049 [2024-12-06 13:17:19.559768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.049 [2024-12-06 13:17:19.559772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50dc00) on tqpair=0x4ccd90 00:23:13.049 [2024-12-06 13:17:19.559783] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:13.049 [2024-12-06 13:17:19.559793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:13.049 [2024-12-06 13:17:19.559803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:13.049 [2024-12-06 13:17:19.559810] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:13.049 [2024-12-06 13:17:19.559818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4ccd90) 00:23:13.049 [2024-12-06 13:17:19.559834] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:13.049 [2024-12-06 13:17:19.559854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50dc00, cid 4, qid 0 00:23:13.049 [2024-12-06 13:17:19.559911] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.049 [2024-12-06 13:17:19.559918] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.049 [2024-12-06 13:17:19.559922] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.559926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50dc00) on tqpair=0x4ccd90 00:23:13.049 [2024-12-06 13:17:19.560003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:13.049 [2024-12-06 13:17:19.560017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:13.049 [2024-12-06 13:17:19.560027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.560031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4ccd90) 00:23:13.049 [2024-12-06 13:17:19.560039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.049 [2024-12-06 13:17:19.560061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50dc00, cid 4, qid 0 00:23:13.049 [2024-12-06 13:17:19.560136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.049 [2024-12-06 13:17:19.560144] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.049 [2024-12-06 13:17:19.560148] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.560153] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4ccd90): datao=0, datal=4096, cccid=4 00:23:13.049 [2024-12-06 13:17:19.560158] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x50dc00) on tqpair(0x4ccd90): expected_datao=0, payload_size=4096 00:23:13.049 [2024-12-06 13:17:19.560163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.560171] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.560175] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.560184] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.049 [2024-12-06 13:17:19.560191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.049 [2024-12-06 13:17:19.560194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.049 [2024-12-06 13:17:19.560199] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50dc00) on tqpair=0x4ccd90 00:23:13.050 [2024-12-06 13:17:19.560218] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:13.050 [2024-12-06 13:17:19.560230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:13.050 [2024-12-06 13:17:19.560242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:13.050 [2024-12-06 13:17:19.560250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.560255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4ccd90) 00:23:13.050 [2024-12-06 13:17:19.560262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.050 [2024-12-06 13:17:19.560284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50dc00, cid 4, qid 0 00:23:13.050 [2024-12-06 13:17:19.560377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.050 [2024-12-06 13:17:19.560385] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.050 [2024-12-06 13:17:19.560389] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.560393] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4ccd90): datao=0, datal=4096, cccid=4 00:23:13.050 [2024-12-06 13:17:19.560398] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x50dc00) on tqpair(0x4ccd90): expected_datao=0, payload_size=4096 00:23:13.050 [2024-12-06 13:17:19.560402] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.560410] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.560414] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.560423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.050 [2024-12-06 13:17:19.560429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.050 [2024-12-06 13:17:19.560433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.050 
[2024-12-06 13:17:19.560437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50dc00) on tqpair=0x4ccd90 00:23:13.050 [2024-12-06 13:17:19.560454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:13.050 [2024-12-06 13:17:19.560465] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:13.050 [2024-12-06 13:17:19.560474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.560479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4ccd90) 00:23:13.050 [2024-12-06 13:17:19.560486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.050 [2024-12-06 13:17:19.560507] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50dc00, cid 4, qid 0 00:23:13.050 [2024-12-06 13:17:19.560577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.050 [2024-12-06 13:17:19.560585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.050 [2024-12-06 13:17:19.560589] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.560594] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4ccd90): datao=0, datal=4096, cccid=4 00:23:13.050 [2024-12-06 13:17:19.560599] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x50dc00) on tqpair(0x4ccd90): expected_datao=0, payload_size=4096 00:23:13.050 [2024-12-06 13:17:19.560604] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.560611] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.560616] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.560624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.050 [2024-12-06 13:17:19.560631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.050 [2024-12-06 13:17:19.560635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.560639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50dc00) on tqpair=0x4ccd90 00:23:13.050 [2024-12-06 13:17:19.560649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:13.050 [2024-12-06 13:17:19.560658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:13.050 [2024-12-06 13:17:19.560672] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:13.050 [2024-12-06 13:17:19.560679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:13.050 [2024-12-06 13:17:19.560686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:13.050 [2024-12-06 13:17:19.560691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:13.050 [2024-12-06 13:17:19.560697] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:13.050 [2024-12-06 13:17:19.560702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:13.050 [2024-12-06 13:17:19.560708] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:13.050 [2024-12-06 13:17:19.560726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.560731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4ccd90) 00:23:13.050 [2024-12-06 13:17:19.560739] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.050 [2024-12-06 13:17:19.560747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.560751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.560755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4ccd90) 00:23:13.050 [2024-12-06 13:17:19.560762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.050 [2024-12-06 13:17:19.560789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50dc00, cid 4, qid 0 00:23:13.050 [2024-12-06 13:17:19.560796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50dd80, cid 5, qid 0 00:23:13.050 [2024-12-06 13:17:19.560871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.050 [2024-12-06 13:17:19.560878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.050 [2024-12-06 13:17:19.560882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.560886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50dc00) on tqpair=0x4ccd90 00:23:13.050 [2024-12-06 13:17:19.560894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.050 [2024-12-06 13:17:19.560900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.050 [2024-12-06 13:17:19.560904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.560908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50dd80) on tqpair=0x4ccd90 00:23:13.050 [2024-12-06 13:17:19.560920] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.560924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4ccd90) 00:23:13.050 [2024-12-06 13:17:19.560932] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.050 [2024-12-06 13:17:19.560967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50dd80, cid 5, qid 0 00:23:13.050 [2024-12-06 13:17:19.561031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.050 [2024-12-06 13:17:19.561039] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.050 [2024-12-06 13:17:19.561043] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.561047] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50dd80) on tqpair=0x4ccd90 00:23:13.050 [2024-12-06 13:17:19.561058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.561063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4ccd90) 00:23:13.050 [2024-12-06 13:17:19.561070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.050 [2024-12-06 13:17:19.561089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50dd80, cid 5, qid 0 00:23:13.050 [2024-12-06 13:17:19.561146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.050 [2024-12-06 13:17:19.561153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.050 [2024-12-06 13:17:19.561157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.050 [2024-12-06 13:17:19.561161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50dd80) on tqpair=0x4ccd90 00:23:13.050 [2024-12-06 13:17:19.561172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561177] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4ccd90) 00:23:13.051 [2024-12-06 13:17:19.561184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.051 [2024-12-06 13:17:19.561202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50dd80, cid 5, qid 0 00:23:13.051 [2024-12-06 13:17:19.561253] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.051 [2024-12-06 13:17:19.561261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.051 [2024-12-06 13:17:19.561264] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561269] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50dd80) on tqpair=0x4ccd90 00:23:13.051 [2024-12-06 13:17:19.561289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561294] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4ccd90) 00:23:13.051 [2024-12-06 13:17:19.561302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.051 [2024-12-06 13:17:19.561310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4ccd90) 00:23:13.051 [2024-12-06 13:17:19.561321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.051 [2024-12-06 13:17:19.561329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561333] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x4ccd90) 00:23:13.051 [2024-12-06 13:17:19.561340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 
nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.051 [2024-12-06 13:17:19.561351] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x4ccd90) 00:23:13.051 [2024-12-06 13:17:19.561363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.051 [2024-12-06 13:17:19.561384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50dd80, cid 5, qid 0 00:23:13.051 [2024-12-06 13:17:19.561391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50dc00, cid 4, qid 0 00:23:13.051 [2024-12-06 13:17:19.561397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50df00, cid 6, qid 0 00:23:13.051 [2024-12-06 13:17:19.561402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50e080, cid 7, qid 0 00:23:13.051 [2024-12-06 13:17:19.561548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.051 [2024-12-06 13:17:19.561556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.051 [2024-12-06 13:17:19.561560] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561564] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4ccd90): datao=0, datal=8192, cccid=5 00:23:13.051 [2024-12-06 13:17:19.561569] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x50dd80) on tqpair(0x4ccd90): expected_datao=0, payload_size=8192 00:23:13.051 [2024-12-06 13:17:19.561574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561591] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561596] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.051 [2024-12-06 13:17:19.561609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.051 [2024-12-06 13:17:19.561613] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561617] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4ccd90): datao=0, datal=512, cccid=4 00:23:13.051 [2024-12-06 13:17:19.561622] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x50dc00) on tqpair(0x4ccd90): expected_datao=0, payload_size=512 00:23:13.051 [2024-12-06 13:17:19.561627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561634] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561638] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.051 [2024-12-06 13:17:19.561650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.051 [2024-12-06 13:17:19.561654] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561658] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4ccd90): datao=0, datal=512, cccid=6 00:23:13.051 [2024-12-06 13:17:19.561663] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x50df00) on tqpair(0x4ccd90): expected_datao=0, payload_size=512 00:23:13.051 [2024-12-06 13:17:19.561667] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561674] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561678] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.051 [2024-12-06 13:17:19.561690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.051 [2024-12-06 13:17:19.561694] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561698] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4ccd90): datao=0, datal=4096, cccid=7 00:23:13.051 [2024-12-06 13:17:19.561702] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x50e080) on tqpair(0x4ccd90): expected_datao=0, payload_size=4096 00:23:13.051 [2024-12-06 13:17:19.561707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561714] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561718] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.051 [2024-12-06 13:17:19.561733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.051 [2024-12-06 13:17:19.561737] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561741] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50dd80) on tqpair=0x4ccd90 00:23:13.051 [2024-12-06 13:17:19.561757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.051 [2024-12-06 13:17:19.561764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.051 [2024-12-06 13:17:19.561768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50dc00) on tqpair=0x4ccd90 00:23:13.051 [2024-12-06 13:17:19.561785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.051 [2024-12-06 13:17:19.561791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.051 [2024-12-06 13:17:19.561795] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561799] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50df00) on tqpair=0x4ccd90 00:23:13.051 [2024-12-06 13:17:19.561808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.051 [2024-12-06 13:17:19.561814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.051 [2024-12-06 13:17:19.561818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.051 [2024-12-06 13:17:19.561822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50e080) on tqpair=0x4ccd90 00:23:13.051 ===================================================== 00:23:13.051 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:13.051 ===================================================== 00:23:13.051 Controller Capabilities/Features 00:23:13.051 ================================ 
00:23:13.051 Vendor ID: 8086
00:23:13.051 Subsystem Vendor ID: 8086
00:23:13.051 Serial Number: SPDK00000000000001
00:23:13.051 Model Number: SPDK bdev Controller
00:23:13.051 Firmware Version: 25.01
00:23:13.051 Recommended Arb Burst: 6
00:23:13.051 IEEE OUI Identifier: e4 d2 5c
00:23:13.051 Multi-path I/O
00:23:13.051 May have multiple subsystem ports: Yes
00:23:13.051 May have multiple controllers: Yes
00:23:13.051 Associated with SR-IOV VF: No
00:23:13.051 Max Data Transfer Size: 131072
00:23:13.051 Max Number of Namespaces: 32
00:23:13.051 Max Number of I/O Queues: 127
00:23:13.051 NVMe Specification Version (VS): 1.3
00:23:13.051 NVMe Specification Version (Identify): 1.3
00:23:13.051 Maximum Queue Entries: 128
00:23:13.051 Contiguous Queues Required: Yes
00:23:13.051 Arbitration Mechanisms Supported
00:23:13.051 Weighted Round Robin: Not Supported
00:23:13.051 Vendor Specific: Not Supported
00:23:13.051 Reset Timeout: 15000 ms
00:23:13.051 Doorbell Stride: 4 bytes
00:23:13.051 NVM Subsystem Reset: Not Supported
00:23:13.051 Command Sets Supported
00:23:13.051 NVM Command Set: Supported
00:23:13.051 Boot Partition: Not Supported
00:23:13.051 Memory Page Size Minimum: 4096 bytes
00:23:13.051 Memory Page Size Maximum: 4096 bytes
00:23:13.051 Persistent Memory Region: Not Supported
00:23:13.051 Optional Asynchronous Events Supported
00:23:13.051 Namespace Attribute Notices: Supported
00:23:13.051 Firmware Activation Notices: Not Supported
00:23:13.051 ANA Change Notices: Not Supported
00:23:13.051 PLE Aggregate Log Change Notices: Not Supported
00:23:13.051 LBA Status Info Alert Notices: Not Supported
00:23:13.051 EGE Aggregate Log Change Notices: Not Supported
00:23:13.051 Normal NVM Subsystem Shutdown event: Not Supported
00:23:13.051 Zone Descriptor Change Notices: Not Supported
00:23:13.051 Discovery Log Change Notices: Not Supported
00:23:13.051 Controller Attributes
00:23:13.051 128-bit Host Identifier: Supported
00:23:13.051 Non-Operational Permissive Mode: Not Supported
00:23:13.051 NVM Sets: Not Supported
00:23:13.051 Read Recovery Levels: Not Supported
00:23:13.051 Endurance Groups: Not Supported
00:23:13.051 Predictable Latency Mode: Not Supported
00:23:13.051 Traffic Based Keep Alive: Not Supported
00:23:13.051 Namespace Granularity: Not Supported
00:23:13.051 SQ Associations: Not Supported
00:23:13.051 UUID List: Not Supported
00:23:13.051 Multi-Domain Subsystem: Not Supported
00:23:13.051 Fixed Capacity Management: Not Supported
00:23:13.051 Variable Capacity Management: Not Supported
00:23:13.051 Delete Endurance Group: Not Supported
00:23:13.051 Delete NVM Set: Not Supported
00:23:13.051 Extended LBA Formats Supported: Not Supported
00:23:13.052 Flexible Data Placement Supported: Not Supported
00:23:13.052
00:23:13.052 Controller Memory Buffer Support
00:23:13.052 ================================
00:23:13.052 Supported: No
00:23:13.052
00:23:13.052 Persistent Memory Region Support
00:23:13.052 ================================
00:23:13.052 Supported: No
00:23:13.052
00:23:13.052 Admin Command Set Attributes
00:23:13.052 ============================
00:23:13.052 Security Send/Receive: Not Supported
00:23:13.052 Format NVM: Not Supported
00:23:13.052 Firmware Activate/Download: Not Supported
00:23:13.052 Namespace Management: Not Supported
00:23:13.052 Device Self-Test: Not Supported
00:23:13.052 Directives: Not Supported
00:23:13.052 NVMe-MI: Not Supported
00:23:13.052 Virtualization Management: Not Supported
00:23:13.052 Doorbell Buffer Config: Not Supported
00:23:13.052 Get LBA Status Capability: Not Supported
00:23:13.052 Command & Feature Lockdown Capability: Not Supported
00:23:13.052 Abort Command Limit: 4
00:23:13.052 Async Event Request Limit: 4
00:23:13.052 Number of Firmware Slots: N/A
00:23:13.052 Firmware Slot 1 Read-Only: N/A
00:23:13.052 Firmware Activation Without Reset: N/A
00:23:13.052 Multiple Update Detection Support: N/A
00:23:13.052 Firmware Update Granularity: No Information Provided
00:23:13.052 Per-Namespace SMART Log: No
00:23:13.052 Asymmetric Namespace Access Log Page: Not Supported
00:23:13.052 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:23:13.052 Command Effects Log Page: Supported
00:23:13.052 Get Log Page Extended Data: Supported
00:23:13.052 Telemetry Log Pages: Not Supported
00:23:13.052 Persistent Event Log Pages: Not Supported
00:23:13.052 Supported Log Pages Log Page: May Support
00:23:13.052 Commands Supported & Effects Log Page: Not Supported
00:23:13.052 Feature Identifiers & Effects Log Page: May Support
00:23:13.052 NVMe-MI Commands & Effects Log Page: May Support
00:23:13.052 Data Area 4 for Telemetry Log: Not Supported
00:23:13.052 Error Log Page Entries Supported: 128
00:23:13.052 Keep Alive: Supported
00:23:13.052 Keep Alive Granularity: 10000 ms
00:23:13.052
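The block above is the controller-level Identify data the test host just fetched from the target over NVMe/TCP. As a rough cross-check of the same target from an ordinary Linux host, a minimal sketch (assuming nvme-cli and the kernel nvme-tcp driver are available; the /dev/nvme0 name is illustrative and depends on enumeration):

    modprobe nvme-tcp                                              # kernel NVMe/TCP host driver
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0 -H                                     # human-readable controller identify, same fields as above
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1                  # drop the test connection when done
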
00:23:13.052 NVM Command Set Attributes
00:23:13.052 ==========================
00:23:13.052 Submission Queue Entry Size
00:23:13.052 Max: 64
00:23:13.052 Min: 64
00:23:13.052 Completion Queue Entry Size
00:23:13.052 Max: 16
00:23:13.052 Min: 16
00:23:13.052 Number of Namespaces: 32
00:23:13.052 Compare Command: Supported
00:23:13.052 Write Uncorrectable Command: Not Supported
00:23:13.052 Dataset Management Command: Supported
00:23:13.052 Write Zeroes Command: Supported
00:23:13.052 Set Features Save Field: Not Supported
00:23:13.052 Reservations: Supported
00:23:13.052 Timestamp: Not Supported
00:23:13.052 Copy: Supported
00:23:13.052 Volatile Write Cache: Present
00:23:13.052 Atomic Write Unit (Normal): 1
00:23:13.052 Atomic Write Unit (PFail): 1
00:23:13.052 Atomic Compare & Write Unit: 1
00:23:13.052 Fused Compare & Write: Supported
00:23:13.052 Scatter-Gather List
00:23:13.052 SGL Command Set: Supported
00:23:13.052 SGL Keyed: Supported
00:23:13.052 SGL Bit Bucket Descriptor: Not Supported
00:23:13.052 SGL Metadata Pointer: Not Supported
00:23:13.052 Oversized SGL: Not Supported
00:23:13.052 SGL Metadata Address: Not Supported
00:23:13.052 SGL Offset: Supported
00:23:13.052 Transport SGL Data Block: Not Supported
00:23:13.052 Replay Protected Memory Block: Not Supported
00:23:13.052
00:23:13.052 Firmware Slot Information
00:23:13.052 =========================
00:23:13.052 Active slot: 1
00:23:13.052 Slot 1 Firmware Revision: 25.01
00:23:13.052
00:23:13.052
00:23:13.052 Commands Supported and Effects
00:23:13.052 ==============================
00:23:13.052 Admin Commands
00:23:13.052 --------------
00:23:13.052 Get Log Page (02h): Supported
00:23:13.052 Identify (06h): Supported
00:23:13.052 Abort (08h): Supported
00:23:13.052 Set Features (09h): Supported
00:23:13.052 Get Features (0Ah): Supported
00:23:13.052 Asynchronous Event Request (0Ch): Supported
00:23:13.052 Keep Alive (18h): Supported
00:23:13.052 I/O Commands
00:23:13.052 ------------
00:23:13.052 Flush (00h): Supported LBA-Change
00:23:13.052 Write (01h): Supported LBA-Change
00:23:13.052 Read (02h): Supported
00:23:13.052 Compare (05h): Supported
00:23:13.052 Write Zeroes (08h): Supported LBA-Change
00:23:13.052 Dataset Management (09h): Supported LBA-Change
00:23:13.052 Copy (19h): Supported LBA-Change
00:23:13.052
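The Commands Supported and Effects table above is decoded from the Commands Supported and Effects log page (log ID 05h), which the host requested earlier in this trace (the GET LOG PAGE with cdw10:03ff0005). A sketch for pulling the same page from a connected kernel host (assuming nvme-cli; /dev/nvme0 is illustrative):

    nvme get-log /dev/nvme0 --log-id=0x05 --log-len=4096   # raw dump of the page
    nvme effects-log /dev/nvme0                            # decoded view, if the effects-log subcommand is available
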
00:23:13.052 Error Log
00:23:13.052 =========
00:23:13.052
00:23:13.052 Arbitration
00:23:13.052 ===========
00:23:13.052 Arbitration Burst: 1
00:23:13.052
00:23:13.052 Power Management
00:23:13.052 ================
00:23:13.052 Number of Power States: 1
00:23:13.052 Current Power State: Power State #0
00:23:13.052 Power State #0:
00:23:13.052 Max Power: 0.00 W
00:23:13.052 Non-Operational State: Operational
00:23:13.052 Entry Latency: Not Reported
00:23:13.052 Exit Latency: Not Reported
00:23:13.052 Relative Read Throughput: 0
00:23:13.052 Relative Read Latency: 0
00:23:13.052 Relative Write Throughput: 0
00:23:13.052 Relative Write Latency: 0
00:23:13.052 Idle Power: Not Reported
00:23:13.052 Active Power: Not Reported
00:23:13.052 Non-Operational Permissive Mode: Not Supported
00:23:13.052
00:23:13.052 Health Information
00:23:13.052 ==================
00:23:13.052 Critical Warnings:
00:23:13.052 Available Spare Space: OK
00:23:13.052 Temperature: OK
00:23:13.052 Device Reliability: OK
00:23:13.052 Read Only: No
00:23:13.052 Volatile Memory Backup: OK
00:23:13.052 Current Temperature: 0 Kelvin (-273 Celsius)
00:23:13.052 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:23:13.052 Available Spare: 0%
00:23:13.052 Available Spare Threshold: 0%
00:23:13.052 Life Percentage Used:[2024-12-06 13:17:19.561932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:13.052 [2024-12-06 13:17:19.561939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x4ccd90)
00:23:13.052 [2024-12-06 13:17:19.561962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.052 [2024-12-06 13:17:19.561988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50e080, cid 7, qid 0
00:23:13.052 [2024-12-06 13:17:19.562066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:13.052 [2024-12-06 13:17:19.562079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:13.052 [2024-12-06 13:17:19.562086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:13.052 [2024-12-06 13:17:19.562093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50e080) on tqpair=0x4ccd90
00:23:13.052 [2024-12-06 13:17:19.562164] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:23:13.052 [2024-12-06 13:17:19.562183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50d600) on tqpair=0x4ccd90
00:23:13.052 [2024-12-06 13:17:19.562191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:13.052 [2024-12-06 13:17:19.562198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50d780) on tqpair=0x4ccd90
00:23:13.052 [2024-12-06 13:17:19.562203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:13.052 [2024-12-06 13:17:19.562209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50d900) on tqpair=0x4ccd90
00:23:13.052 [2024-12-06 13:17:19.562213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:13.052 [2024-12-06 13:17:19.562219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90
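From "Prepare to destruct SSD" onward the host begins an orderly controller shutdown: outstanding admin requests are failed back (the ABORTED - SQ DELETION completions above), CC.SHN is written via a Fabrics Property Set, and CSTS.SHST is then polled with the long run of FABRIC PROPERTY GET commands that fills the entries below, bounded by the logged 10000 ms shutdown timeout. On a kernel host the same registers can be inspected by hand, a sketch (assuming nvme-cli's get-property support for fabrics controllers; register offsets per the NVMe specification, /dev/nvme0 illustrative):

    nvme get-property /dev/nvme0 --offset=0x14 --human-readable   # CC, including the CC.SHN shutdown notification bits
    nvme get-property /dev/nvme0 --offset=0x1c --human-readable   # CSTS, poll CSTS.SHST until shutdown complete
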
00:23:13.052 [2024-12-06 13:17:19.562224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.052 [2024-12-06 13:17:19.562235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.052 [2024-12-06 13:17:19.562239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.052 [2024-12-06 13:17:19.562244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.052 [2024-12-06 13:17:19.562252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.052 [2024-12-06 13:17:19.562284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.052 [2024-12-06 13:17:19.562349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.052 [2024-12-06 13:17:19.562357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.052 [2024-12-06 13:17:19.562361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.052 [2024-12-06 13:17:19.562365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.052 [2024-12-06 13:17:19.562374] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.052 [2024-12-06 13:17:19.562378] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.052 [2024-12-06 13:17:19.562382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.052 [2024-12-06 13:17:19.562390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.052 [2024-12-06 13:17:19.562413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.052 [2024-12-06 13:17:19.562494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.052 [2024-12-06 13:17:19.562501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.052 [2024-12-06 13:17:19.562505] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.562509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.053 [2024-12-06 13:17:19.562515] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:13.053 [2024-12-06 13:17:19.562520] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:13.053 [2024-12-06 13:17:19.562531] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.562535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.562539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.053 [2024-12-06 13:17:19.562547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.053 [2024-12-06 13:17:19.562565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.053 [2024-12-06 13:17:19.562619] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.053 [2024-12-06 13:17:19.562633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:23:13.053 [2024-12-06 13:17:19.562637] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.562642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.053 [2024-12-06 13:17:19.562654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.562659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.562663] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.053 [2024-12-06 13:17:19.562671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.053 [2024-12-06 13:17:19.562691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.053 [2024-12-06 13:17:19.562750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.053 [2024-12-06 13:17:19.562757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.053 [2024-12-06 13:17:19.562761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.562765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.053 [2024-12-06 13:17:19.562776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.562781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.562785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.053 [2024-12-06 13:17:19.562792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.053 [2024-12-06 13:17:19.562810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.053 [2024-12-06 13:17:19.562869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.053 [2024-12-06 13:17:19.562876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.053 [2024-12-06 13:17:19.562880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.562884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.053 [2024-12-06 13:17:19.562895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.562900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.562904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.053 [2024-12-06 13:17:19.562911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.053 [2024-12-06 13:17:19.562929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.053 [2024-12-06 13:17:19.563006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.053 [2024-12-06 13:17:19.563015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.053 [2024-12-06 13:17:19.563019] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.053 [2024-12-06 13:17:19.563035] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563040] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563044] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.053 [2024-12-06 13:17:19.563051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.053 [2024-12-06 13:17:19.563072] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.053 [2024-12-06 13:17:19.563140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.053 [2024-12-06 13:17:19.563149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.053 [2024-12-06 13:17:19.563153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.053 [2024-12-06 13:17:19.563169] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.053 [2024-12-06 13:17:19.563186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.053 [2024-12-06 13:17:19.563206] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.053 [2024-12-06 13:17:19.563267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.053 [2024-12-06 13:17:19.563274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.053 [2024-12-06 13:17:19.563278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.053 [2024-12-06 13:17:19.563294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563298] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563302] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.053 [2024-12-06 13:17:19.563310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.053 [2024-12-06 13:17:19.563328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.053 [2024-12-06 13:17:19.563385] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.053 [2024-12-06 13:17:19.563397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.053 [2024-12-06 13:17:19.563402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.053 [2024-12-06 13:17:19.563418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563423] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.053 [2024-12-06 13:17:19.563434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.053 [2024-12-06 13:17:19.563454] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.053 [2024-12-06 13:17:19.563506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.053 [2024-12-06 13:17:19.563513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.053 [2024-12-06 13:17:19.563517] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.053 [2024-12-06 13:17:19.563533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.053 [2024-12-06 13:17:19.563549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.053 [2024-12-06 13:17:19.563568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.053 [2024-12-06 13:17:19.563623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.053 [2024-12-06 13:17:19.563630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.053 [2024-12-06 13:17:19.563634] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.053 [2024-12-06 13:17:19.563649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.053 [2024-12-06 13:17:19.563666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.053 [2024-12-06 13:17:19.563684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.053 [2024-12-06 13:17:19.563739] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.053 [2024-12-06 13:17:19.563746] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.053 [2024-12-06 13:17:19.563750] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563754] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.053 [2024-12-06 13:17:19.563765] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.053 [2024-12-06 
13:17:19.563781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.053 [2024-12-06 13:17:19.563800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.053 [2024-12-06 13:17:19.563852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.053 [2024-12-06 13:17:19.563859] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.053 [2024-12-06 13:17:19.563863] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.053 [2024-12-06 13:17:19.563878] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563883] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.053 [2024-12-06 13:17:19.563887] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.053 [2024-12-06 13:17:19.563894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.054 [2024-12-06 13:17:19.563912] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.054 [2024-12-06 13:17:19.563980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.054 [2024-12-06 13:17:19.563988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.054 [2024-12-06 13:17:19.563992] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.054 [2024-12-06 13:17:19.563996] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.054 [2024-12-06 13:17:19.564007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.054 [2024-12-06 13:17:19.564012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.054 [2024-12-06 13:17:19.564016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.054 [2024-12-06 13:17:19.564024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.054 [2024-12-06 13:17:19.564044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.054 [2024-12-06 13:17:19.564100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.054 [2024-12-06 13:17:19.564107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.054 [2024-12-06 13:17:19.564111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.054 [2024-12-06 13:17:19.564115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.054 [2024-12-06 13:17:19.564126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.054 [2024-12-06 13:17:19.564131] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.054 [2024-12-06 13:17:19.564135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.054 [2024-12-06 13:17:19.564142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.054 [2024-12-06 13:17:19.564160] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.054 [2024-12-06 13:17:19.564222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.314 [2024-12-06 13:17:19.564229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.314 [2024-12-06 13:17:19.564233] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.314 [2024-12-06 13:17:19.564237] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.314 [2024-12-06 13:17:19.564248] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.314 [2024-12-06 13:17:19.564253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.314 [2024-12-06 13:17:19.564257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.314 [2024-12-06 13:17:19.564264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.314 [2024-12-06 13:17:19.564282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.314 [2024-12-06 13:17:19.564337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.314 [2024-12-06 13:17:19.564345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.314 [2024-12-06 13:17:19.564349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.314 [2024-12-06 13:17:19.564353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.314 [2024-12-06 13:17:19.564364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.314 [2024-12-06 13:17:19.564368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.314 [2024-12-06 13:17:19.564372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.314 [2024-12-06 13:17:19.564380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.314 [2024-12-06 13:17:19.564398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.314 [2024-12-06 13:17:19.564450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.314 [2024-12-06 13:17:19.564457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.314 [2024-12-06 13:17:19.564461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.314 [2024-12-06 13:17:19.564465] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.314 [2024-12-06 13:17:19.564476] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.314 [2024-12-06 13:17:19.564481] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.314 [2024-12-06 13:17:19.564485] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.314 [2024-12-06 13:17:19.564492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.314 [2024-12-06 13:17:19.564510] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.314 [2024-12-06 13:17:19.564565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.314 [2024-12-06 
13:17:19.564572] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.314 [2024-12-06 13:17:19.564576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.314 [2024-12-06 13:17:19.564581] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.314 [2024-12-06 13:17:19.564592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.314 [2024-12-06 13:17:19.564596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.314 [2024-12-06 13:17:19.564600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.314 [2024-12-06 13:17:19.564608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.314 [2024-12-06 13:17:19.564626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.314 [2024-12-06 13:17:19.564681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.314 [2024-12-06 13:17:19.564688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.314 [2024-12-06 13:17:19.564692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.314 [2024-12-06 13:17:19.564696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.314 [2024-12-06 13:17:19.564707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.314 [2024-12-06 13:17:19.564711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.314 [2024-12-06 13:17:19.564715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.314 [2024-12-06 13:17:19.564723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.314 [2024-12-06 13:17:19.564741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.314 [2024-12-06 13:17:19.564794] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.314 [2024-12-06 13:17:19.564801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.314 [2024-12-06 13:17:19.564805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.564809] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.315 [2024-12-06 13:17:19.564820] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.564825] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.564829] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.315 [2024-12-06 13:17:19.564836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.315 [2024-12-06 13:17:19.564855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.315 [2024-12-06 13:17:19.564910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.315 [2024-12-06 13:17:19.564917] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.315 [2024-12-06 13:17:19.564921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.315 [2024-12-06 
13:17:19.564925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.315 [2024-12-06 13:17:19.564936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.564952] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.564957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.315 [2024-12-06 13:17:19.564965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.315 [2024-12-06 13:17:19.564985] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.315 [2024-12-06 13:17:19.565041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.315 [2024-12-06 13:17:19.565048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.315 [2024-12-06 13:17:19.565052] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.315 [2024-12-06 13:17:19.565067] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.315 [2024-12-06 13:17:19.565083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.315 [2024-12-06 13:17:19.565101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.315 [2024-12-06 13:17:19.565157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.315 [2024-12-06 13:17:19.565165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.315 [2024-12-06 13:17:19.565168] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.315 [2024-12-06 13:17:19.565184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565189] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.315 [2024-12-06 13:17:19.565200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.315 [2024-12-06 13:17:19.565218] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.315 [2024-12-06 13:17:19.565270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.315 [2024-12-06 13:17:19.565277] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.315 [2024-12-06 13:17:19.565281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565285] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.315 [2024-12-06 13:17:19.565296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:23:13.315 [2024-12-06 13:17:19.565301] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.315 [2024-12-06 13:17:19.565312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.315 [2024-12-06 13:17:19.565330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.315 [2024-12-06 13:17:19.565392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.315 [2024-12-06 13:17:19.565399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.315 [2024-12-06 13:17:19.565403] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.315 [2024-12-06 13:17:19.565418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.315 [2024-12-06 13:17:19.565434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.315 [2024-12-06 13:17:19.565453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.315 [2024-12-06 13:17:19.565507] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.315 [2024-12-06 13:17:19.565514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.315 [2024-12-06 13:17:19.565518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.315 [2024-12-06 13:17:19.565533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.315 [2024-12-06 13:17:19.565549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.315 [2024-12-06 13:17:19.565568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.315 [2024-12-06 13:17:19.565621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.315 [2024-12-06 13:17:19.565628] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.315 [2024-12-06 13:17:19.565632] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565636] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.315 [2024-12-06 13:17:19.565647] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565652] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x4ccd90) 00:23:13.315 [2024-12-06 13:17:19.565663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.315 [2024-12-06 13:17:19.565681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.315 [2024-12-06 13:17:19.565739] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.315 [2024-12-06 13:17:19.565747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.315 [2024-12-06 13:17:19.565750] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.315 [2024-12-06 13:17:19.565767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.315 [2024-12-06 13:17:19.565783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.315 [2024-12-06 13:17:19.565801] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.315 [2024-12-06 13:17:19.565860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.315 [2024-12-06 13:17:19.565867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.315 [2024-12-06 13:17:19.565871] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.315 [2024-12-06 13:17:19.565886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.565895] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.315 [2024-12-06 13:17:19.565902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.315 [2024-12-06 13:17:19.565920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.315 [2024-12-06 13:17:19.569960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.315 [2024-12-06 13:17:19.569986] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.315 [2024-12-06 13:17:19.569991] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.569996] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.315 [2024-12-06 13:17:19.570012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.570018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.570023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ccd90) 00:23:13.315 [2024-12-06 13:17:19.570032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.315 
[2024-12-06 13:17:19.570069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x50da80, cid 3, qid 0 00:23:13.315 [2024-12-06 13:17:19.570153] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.315 [2024-12-06 13:17:19.570163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.315 [2024-12-06 13:17:19.570167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.315 [2024-12-06 13:17:19.570171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x50da80) on tqpair=0x4ccd90 00:23:13.316 [2024-12-06 13:17:19.570180] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:23:13.316 0% 00:23:13.316 Data Units Read: 0 00:23:13.316 Data Units Written: 0 00:23:13.316 Host Read Commands: 0 00:23:13.316 Host Write Commands: 0 00:23:13.316 Controller Busy Time: 0 minutes 00:23:13.316 Power Cycles: 0 00:23:13.316 Power On Hours: 0 hours 00:23:13.316 Unsafe Shutdowns: 0 00:23:13.316 Unrecoverable Media Errors: 0 00:23:13.316 Lifetime Error Log Entries: 0 00:23:13.316 Warning Temperature Time: 0 minutes 00:23:13.316 Critical Temperature Time: 0 minutes 00:23:13.316 00:23:13.316 Number of Queues 00:23:13.316 ================ 00:23:13.316 Number of I/O Submission Queues: 127 00:23:13.316 Number of I/O Completion Queues: 127 00:23:13.316 00:23:13.316 Active Namespaces 00:23:13.316 ================= 00:23:13.316 Namespace ID:1 00:23:13.316 Error Recovery Timeout: Unlimited 00:23:13.316 Command Set Identifier: NVM (00h) 00:23:13.316 Deallocate: Supported 00:23:13.316 Deallocated/Unwritten Error: Not Supported 00:23:13.316 Deallocated Read Value: Unknown 00:23:13.316 Deallocate in Write Zeroes: Not Supported 00:23:13.316 Deallocated Guard Field: 0xFFFF 00:23:13.316 Flush: Supported 00:23:13.316 Reservation: Supported 00:23:13.316 Namespace Sharing Capabilities: Multiple Controllers 00:23:13.316 Size (in LBAs): 131072 (0GiB) 00:23:13.316 Capacity (in LBAs): 131072 (0GiB) 00:23:13.316 Utilization (in LBAs): 131072 (0GiB) 00:23:13.316 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:13.316 EUI64: ABCDEF0123456789 00:23:13.316 UUID: 5681ab2a-a853-412f-bd8f-22d3eed293ab 00:23:13.316 Thin Provisioning: Not Supported 00:23:13.316 Per-NS Atomic Units: Yes 00:23:13.316 Atomic Boundary Size (Normal): 0 00:23:13.316 Atomic Boundary Size (PFail): 0 00:23:13.316 Atomic Boundary Offset: 0 00:23:13.316 Maximum Single Source Range Length: 65535 00:23:13.316 Maximum Copy Length: 65535 00:23:13.316 Maximum Source Range Count: 1 00:23:13.316 NGUID/EUI64 Never Reused: No 00:23:13.316 Namespace Write Protected: No 00:23:13.316 Number of LBA Formats: 1 00:23:13.316 Current LBA Format: LBA Format #00 00:23:13.316 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:13.316 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:13.316 13:17:19 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:13.316 rmmod nvme_tcp 00:23:13.316 rmmod nvme_fabrics 00:23:13.316 rmmod nvme_keyring 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 87361 ']' 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 87361 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 87361 ']' 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 87361 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87361 00:23:13.316 killing process with pid 87361 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87361' 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 87361 00:23:13.316 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 87361 00:23:13.575 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:13.575 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:13.575 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:13.575 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:13.575 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:13.575 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:13.575 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:13.575 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:13.575 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:13.575 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:13.575 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 
00:23:13.575 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:13.575 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:13.575 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:13.575 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:13.575 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:13.575 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:13.575 13:17:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:13.575 13:17:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:13.575 13:17:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:13.575 13:17:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:13.575 13:17:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:13.834 13:17:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:23:13.835 00:23:13.835 real 0m2.264s 00:23:13.835 user 0m4.521s 00:23:13.835 sys 0m0.681s 00:23:13.835 ************************************ 00:23:13.835 END TEST nvmf_identify 00:23:13.835 ************************************ 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.835 ************************************ 00:23:13.835 START TEST nvmf_perf 00:23:13.835 ************************************ 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:13.835 * Looking for test storage... 
00:23:13.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:13.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.835 --rc genhtml_branch_coverage=1 00:23:13.835 --rc genhtml_function_coverage=1 00:23:13.835 --rc genhtml_legend=1 00:23:13.835 --rc geninfo_all_blocks=1 00:23:13.835 --rc geninfo_unexecuted_blocks=1 00:23:13.835 00:23:13.835 ' 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:13.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.835 --rc genhtml_branch_coverage=1 00:23:13.835 --rc genhtml_function_coverage=1 00:23:13.835 --rc genhtml_legend=1 00:23:13.835 --rc geninfo_all_blocks=1 00:23:13.835 --rc geninfo_unexecuted_blocks=1 00:23:13.835 00:23:13.835 ' 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:13.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.835 --rc genhtml_branch_coverage=1 00:23:13.835 --rc genhtml_function_coverage=1 00:23:13.835 --rc genhtml_legend=1 00:23:13.835 --rc geninfo_all_blocks=1 00:23:13.835 --rc geninfo_unexecuted_blocks=1 00:23:13.835 00:23:13.835 ' 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:13.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.835 --rc genhtml_branch_coverage=1 00:23:13.835 --rc genhtml_function_coverage=1 00:23:13.835 --rc genhtml_legend=1 00:23:13.835 --rc geninfo_all_blocks=1 00:23:13.835 --rc geninfo_unexecuted_blocks=1 00:23:13.835 00:23:13.835 ' 00:23:13.835 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:14.094 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:14.094 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.094 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.094 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:14.094 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.094 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.094 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.094 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.094 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.094 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.094 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:14.095 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:14.095 Cannot find device "nvmf_init_br" 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:14.095 Cannot find device "nvmf_init_br2" 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:14.095 Cannot find device "nvmf_tgt_br" 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:14.095 Cannot find device "nvmf_tgt_br2" 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:14.095 Cannot find device "nvmf_init_br" 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:14.095 Cannot find device "nvmf_init_br2" 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:14.095 Cannot find device "nvmf_tgt_br" 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:14.095 Cannot find device "nvmf_tgt_br2" 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:14.095 Cannot find device "nvmf_br" 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:14.095 Cannot find device "nvmf_init_if" 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:14.095 Cannot find device "nvmf_init_if2" 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:14.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:14.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:14.095 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:14.096 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:14.096 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:14.096 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:14.096 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:14.096 13:17:20 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:14.355 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:14.355 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:23:14.355 00:23:14.355 --- 10.0.0.3 ping statistics --- 00:23:14.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.355 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:14.355 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:23:14.355 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:23:14.355 00:23:14.355 --- 10.0.0.4 ping statistics --- 00:23:14.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.355 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:14.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:14.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:23:14.355 00:23:14.355 --- 10.0.0.1 ping statistics --- 00:23:14.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.355 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:14.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:23:14.355 00:23:14.355 --- 10.0.0.2 ping statistics --- 00:23:14.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.355 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:14.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=87621 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 87621 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 87621 ']' 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.355 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.356 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
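A condensed sketch of the nvmf_veth_init bring-up traced above (interface names, addresses, and the port-4420 rule are taken from the trace; the second veth pair nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2/10.0.0.4 follows the same pattern and is omitted, as is error handling; the trace embeds the full rule text in the iptables comment):
# target namespace plus one veth pair per side; the target end moves into the namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# initiator address on the host, target address inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
# bridge the host-side peers so initiator and target can talk
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# admit NVMe/TCP traffic; the SPDK_NVMF comment lets teardown grep the rule back out later
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
# connectivity check, as in the trace
ping -c 1 10.0.0.3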
00:23:14.356 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.356 13:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:14.356 [2024-12-06 13:17:20.858787] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:23:14.356 [2024-12-06 13:17:20.859079] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.614 [2024-12-06 13:17:21.010137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:14.614 [2024-12-06 13:17:21.050724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.614 [2024-12-06 13:17:21.050986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.614 [2024-12-06 13:17:21.051205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.614 [2024-12-06 13:17:21.051372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.614 [2024-12-06 13:17:21.051422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.614 [2024-12-06 13:17:21.052517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.614 [2024-12-06 13:17:21.052647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.614 [2024-12-06 13:17:21.053262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:14.614 [2024-12-06 13:17:21.053314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.873 13:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.873 13:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:14.873 13:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:14.873 13:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:14.873 13:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:14.873 13:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.873 13:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:14.873 13:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:23:15.132 13:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:23:15.132 13:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:15.390 13:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:23:15.390 13:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:15.957 13:17:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:15.957 13:17:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:23:15.957 13:17:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 
00:23:15.957 13:17:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:15.957 13:17:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:15.957 [2024-12-06 13:17:22.478571] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.217 13:17:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:16.519 13:17:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:16.519 13:17:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:16.778 13:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:16.778 13:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:17.035 13:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:17.293 [2024-12-06 13:17:23.632031] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:17.294 13:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:17.552 13:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:17.552 13:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:17.552 13:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:17.552 13:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:18.926 Initializing NVMe Controllers 00:23:18.926 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:18.926 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:23:18.926 Initialization complete. Launching workers. 00:23:18.926 ======================================================== 00:23:18.926 Latency(us) 00:23:18.926 Device Information : IOPS MiB/s Average min max 00:23:18.926 PCIE (0000:00:10.0) NSID 1 from core 0: 24736.97 96.63 1293.65 298.71 7910.89 00:23:18.926 ======================================================== 00:23:18.926 Total : 24736.97 96.63 1293.65 298.71 7910.89 00:23:18.926 00:23:18.926 13:17:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:19.859 Initializing NVMe Controllers 00:23:19.859 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:19.859 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:19.859 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:19.859 Initialization complete. Launching workers. 
00:23:19.859 ======================================================== 00:23:19.859 Latency(us) 00:23:19.859 Device Information : IOPS MiB/s Average min max 00:23:19.859 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3126.60 12.21 319.42 116.45 6286.46 00:23:19.859 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.51 0.48 8161.88 4983.27 12044.73 00:23:19.859 ======================================================== 00:23:19.859 Total : 3249.11 12.69 615.14 116.45 12044.73 00:23:19.859 00:23:20.117 13:17:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:21.511 Initializing NVMe Controllers 00:23:21.511 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:21.511 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:21.511 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:21.511 Initialization complete. Launching workers. 00:23:21.511 ======================================================== 00:23:21.511 Latency(us) 00:23:21.511 Device Information : IOPS MiB/s Average min max 00:23:21.511 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5913.84 23.10 5412.26 565.83 15615.48 00:23:21.511 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2650.58 10.35 12182.02 4816.87 20459.38 00:23:21.511 ======================================================== 00:23:21.511 Total : 8564.43 33.45 7507.42 565.83 20459.38 00:23:21.511 00:23:21.511 13:17:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:23:21.511 13:17:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:24.038 Initializing NVMe Controllers 00:23:24.038 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:24.038 Controller IO queue size 128, less than required. 00:23:24.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:24.038 Controller IO queue size 128, less than required. 00:23:24.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:24.038 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:24.038 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:24.038 Initialization complete. Launching workers. 
00:23:24.038 ======================================================== 00:23:24.038 Latency(us) 00:23:24.038 Device Information : IOPS MiB/s Average min max 00:23:24.038 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1486.41 371.60 87154.52 58563.48 163989.51 00:23:24.038 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 546.37 136.59 243973.53 86088.57 398992.46 00:23:24.038 ======================================================== 00:23:24.038 Total : 2032.78 508.19 129303.96 58563.48 398992.46 00:23:24.038 00:23:24.038 13:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:23:24.296 Initializing NVMe Controllers 00:23:24.296 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:24.296 Controller IO queue size 128, less than required. 00:23:24.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:24.296 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:24.296 Controller IO queue size 128, less than required. 00:23:24.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:24.296 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:23:24.296 WARNING: Some requested NVMe devices were skipped 00:23:24.296 No valid NVMe controllers or AIO or URING devices found 00:23:24.296 13:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:23:26.850 Initializing NVMe Controllers 00:23:26.850 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:26.850 Controller IO queue size 128, less than required. 00:23:26.850 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:26.850 Controller IO queue size 128, less than required. 00:23:26.850 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:26.850 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:26.850 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:26.850 Initialization complete. Launching workers. 
00:23:26.850 00:23:26.850 ==================== 00:23:26.850 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:26.850 TCP transport: 00:23:26.850 polls: 7316 00:23:26.850 idle_polls: 4316 00:23:26.850 sock_completions: 3000 00:23:26.850 nvme_completions: 5039 00:23:26.850 submitted_requests: 7544 00:23:26.850 queued_requests: 1 00:23:26.850 00:23:26.850 ==================== 00:23:26.850 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:26.850 TCP transport: 00:23:26.850 polls: 9800 00:23:26.850 idle_polls: 6716 00:23:26.850 sock_completions: 3084 00:23:26.850 nvme_completions: 5821 00:23:26.850 submitted_requests: 8670 00:23:26.850 queued_requests: 1 00:23:26.850 ======================================================== 00:23:26.850 Latency(us) 00:23:26.850 Device Information : IOPS MiB/s Average min max 00:23:26.850 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1259.29 314.82 104717.75 69325.77 166787.61 00:23:26.850 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1454.76 363.69 89849.08 23064.86 150271.51 00:23:26.850 ======================================================== 00:23:26.850 Total : 2714.05 678.51 96747.99 23064.86 166787.61 00:23:26.850 00:23:27.108 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:27.108 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:27.365 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:27.365 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:27.365 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:27.365 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:27.365 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:27.365 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:27.365 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:27.365 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:27.366 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:27.366 rmmod nvme_tcp 00:23:27.366 rmmod nvme_fabrics 00:23:27.366 rmmod nvme_keyring 00:23:27.366 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:27.366 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:27.366 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:27.366 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 87621 ']' 00:23:27.366 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 87621 00:23:27.366 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 87621 ']' 00:23:27.366 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 87621 00:23:27.366 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:27.366 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.366 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87621 00:23:27.366 killing process with pid 87621 00:23:27.366 13:17:33 
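In the transport statistics above, polls minus idle_polls gives the polls that actually found work (NSID 1: 7316 - 4316 = 3000, matching sock_completions), and nvme_completions per socket completion (5039 / 3000, roughly 1.7) suggests each socket event carried between one and two NVMe completions on average. Condensed, the target setup and perf runs traced by this test come down to a sketch like the following (paths and arguments as they appear in the trace; the surrounding perf.sh does considerably more):
# target side: TCP transport, subsystem with two namespaces, listener on the in-namespace address
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# initiator side: -q queue depth, -o IO size in bytes, -w workload, -M read percentage, -t run time in seconds
build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'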
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:27.366 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:27.366 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87621' 00:23:27.366 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 87621 00:23:27.366 13:17:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 87621 00:23:27.932 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:27.932 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:27.932 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:27.932 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:27.932 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:27.932 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:27.932 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:27.932 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:27.932 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:27.932 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:27.932 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:27.932 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:23:28.190 00:23:28.190 real 0m14.433s 00:23:28.190 user 0m52.553s 00:23:28.190 sys 0m3.607s 00:23:28.190 13:17:34 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:28.190 ************************************ 00:23:28.190 END TEST nvmf_perf 00:23:28.190 ************************************ 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.190 ************************************ 00:23:28.190 START TEST nvmf_fio_host 00:23:28.190 ************************************ 00:23:28.190 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:28.451 * Looking for test storage... 00:23:28.451 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:28.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.451 --rc genhtml_branch_coverage=1 00:23:28.451 --rc genhtml_function_coverage=1 00:23:28.451 --rc genhtml_legend=1 00:23:28.451 --rc geninfo_all_blocks=1 00:23:28.451 --rc geninfo_unexecuted_blocks=1 00:23:28.451 00:23:28.451 ' 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:28.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.451 --rc genhtml_branch_coverage=1 00:23:28.451 --rc genhtml_function_coverage=1 00:23:28.451 --rc genhtml_legend=1 00:23:28.451 --rc geninfo_all_blocks=1 00:23:28.451 --rc geninfo_unexecuted_blocks=1 00:23:28.451 00:23:28.451 ' 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:28.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.451 --rc genhtml_branch_coverage=1 00:23:28.451 --rc genhtml_function_coverage=1 00:23:28.451 --rc genhtml_legend=1 00:23:28.451 --rc geninfo_all_blocks=1 00:23:28.451 --rc geninfo_unexecuted_blocks=1 00:23:28.451 00:23:28.451 ' 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:28.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.451 --rc genhtml_branch_coverage=1 00:23:28.451 --rc genhtml_function_coverage=1 00:23:28.451 --rc genhtml_legend=1 00:23:28.451 --rc geninfo_all_blocks=1 00:23:28.451 --rc geninfo_unexecuted_blocks=1 00:23:28.451 00:23:28.451 ' 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:28.451 13:17:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:28.451 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.452 13:17:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:28.452 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
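The nvmftestinit call traced above is what builds the virtual test network used by the rest of this run: two veth pairs tied together by a bridge, with the target-side endpoints moved into a dedicated network namespace. A minimal standalone sketch of that topology, assuming root privileges and iproute2/iptables, with interface names and addresses taken from this harness, would be:

  # create the target namespace and the two veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # address the endpoints: the initiator stays in the root namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # bring links up and enslave both bridge-side peers to one bridge
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # open the NVMe/TCP port and let the bridge forward traffic
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3   # sanity check: initiator can reach the target address

With both bridge-side peers enslaved to nvmf_br, the initiator address 10.0.0.1 in the root namespace gets an L2 path to the target address 10.0.0.3 inside nvmf_tgt_ns_spdk, which is what the ping checks in the trace below verify (the second pair of addresses, 10.0.0.2 and 10.0.0.4, is set up the same way).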
00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:28.452 Cannot find device "nvmf_init_br" 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:28.452 Cannot find device "nvmf_init_br2" 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:28.452 Cannot find device "nvmf_tgt_br" 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:23:28.452 Cannot find device "nvmf_tgt_br2" 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:28.452 Cannot find device "nvmf_init_br" 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:28.452 Cannot find device "nvmf_init_br2" 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:28.452 Cannot find device "nvmf_tgt_br" 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:28.452 Cannot find device "nvmf_tgt_br2" 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:28.452 Cannot find device "nvmf_br" 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:28.452 Cannot find device "nvmf_init_if" 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:23:28.452 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:28.712 Cannot find device "nvmf_init_if2" 00:23:28.712 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:23:28.712 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:28.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:28.712 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:23:28.712 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:28.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:28.712 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:23:28.712 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:28.712 13:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:28.712 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:28.712 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms
00:23:28.712
00:23:28.712 --- 10.0.0.3 ping statistics ---
00:23:28.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:28.712 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms
00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:23:28.712 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:23:28.712 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms
00:23:28.712
00:23:28.712 --- 10.0.0.4 ping statistics ---
00:23:28.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:28.712 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
00:23:28.712 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:23:28.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:28.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms
00:23:28.969
00:23:28.969 --- 10.0.0.1 ping statistics ---
00:23:28.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:28.969 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:23:28.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:28.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms
00:23:28.969
00:23:28.969 --- 10.0.0.2 ping statistics ---
00:23:28.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:28.969 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=88140
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 88140
00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host --
common/autotest_common.sh@835 -- # '[' -z 88140 ']' 00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.969 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.969 [2024-12-06 13:17:35.333473] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:23:28.969 [2024-12-06 13:17:35.333558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.969 [2024-12-06 13:17:35.484672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:29.227 [2024-12-06 13:17:35.523259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.227 [2024-12-06 13:17:35.523526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.227 [2024-12-06 13:17:35.523682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.227 [2024-12-06 13:17:35.523829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.227 [2024-12-06 13:17:35.523871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
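Stripped of the xtrace noise, the bring-up being logged here reduces to launching nvmf_tgt inside the target namespace (just traced) and issuing the short RPC sequence that follows. A sketch of the equivalent commands, where the until loop is a simplified stand-in for the waitforlisten helper used by this harness and spdk_get_version is just a cheap RPC to poll with:

  SPDK=/home/vagrant/spdk_repo/spdk
  # shm id 0, all tracepoint groups enabled, 4-core mask, as in the trace above
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait for the RPC socket before configuring
  until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do sleep 0.2; done
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc1
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

After the last RPC, subsystem nqn.2016-06.io.spdk:cnode1 exposes Malloc1 as namespace 1 on 10.0.0.3:4420, which is the address the fio jobs below connect to.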
00:23:29.227 [2024-12-06 13:17:35.524818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.227 [2024-12-06 13:17:35.524972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.227 [2024-12-06 13:17:35.525045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.227 [2024-12-06 13:17:35.525044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:29.227 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.227 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:29.227 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:29.485 [2024-12-06 13:17:35.886200] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.485 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:29.485 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:29.485 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.485 13:17:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:30.051 Malloc1 00:23:30.051 13:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:30.051 13:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:30.629 13:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:30.887 [2024-12-06 13:17:37.239866] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:30.887 13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:31.144 13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:23:31.144 13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:23:31.145 13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:23:31.145 13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:31.145 13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:31.145 13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:31.145 13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:31.145 13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift
13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
13:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096
00:23:31.403 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:23:31.403 fio-3.35
00:23:31.403 Starting 1 thread
00:23:33.960
00:23:33.960 test: (groupid=0, jobs=1): err= 0: pid=88258: Fri Dec 6 13:17:40 2024
00:23:33.960 read: IOPS=6114, BW=23.9MiB/s (25.0MB/s)(47.9MiB/2005msec)
00:23:33.960 slat (usec): min=2, max=430, avg= 6.33, stdev= 5.90
00:23:33.960 clat (usec): min=4514, max=22529, avg=11989.67, stdev=3759.44
00:23:33.960 lat (usec): min=4518, max=22538, avg=11996.00, stdev=3761.72
00:23:33.960 clat percentiles (usec):
00:23:33.960 | 1.00th=[ 6456], 5.00th=[ 7111], 10.00th=[ 7373], 20.00th=[ 7701],
00:23:33.960 | 30.00th=[ 8160], 40.00th=[11469], 50.00th=[12518], 60.00th=[13435],
00:23:33.960 | 70.00th=[14746], 80.00th=[15795], 90.00th=[16909], 95.00th=[17695],
00:23:33.960 | 99.00th=[18744], 99.50th=[19006], 99.90th=[21365], 99.95th=[21890],
00:23:33.960 | 99.99th=[22414]
00:23:33.960 bw ( KiB/s): min=19448, max=34976, per=99.68%, avg=24380.00, stdev=7153.36, samples=4
00:23:33.960 iops : min= 4862, max= 8744, avg=6095.00, stdev=1788.34, samples=4
00:23:33.960 write: IOPS=6092, BW=23.8MiB/s (25.0MB/s)(47.7MiB/2005msec); 0 zone resets
00:23:33.960 slat (usec): min=2, max=371, avg= 6.55, stdev= 4.87
00:23:33.960 clat (usec): min=3755, max=15425, avg=8863.72, stdev=1813.83
00:23:33.960 lat (usec): min=3782, max=15432, avg=8870.27, stdev=1816.11
00:23:33.960 clat percentiles (usec):
00:23:33.960 | 1.00th=[ 5735], 5.00th=[ 6390], 10.00th=[ 6652], 20.00th=[ 6980],
00:23:33.960 | 30.00th=[ 7308], 40.00th=[ 8094], 50.00th=[ 8979], 60.00th=[ 9634],
00:23:33.960 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11207], 95.00th=[11600],
00:23:33.960 | 99.00th=[12256], 99.50th=[12518], 99.90th=[14746], 99.95th=[14877],
00:23:33.960 | 99.99th=[15401]
00:23:33.960 bw ( KiB/s): min=20344, max=34304, per=99.78%, avg=24316.00, stdev=6700.28, samples=4
00:23:33.960 iops : min= 5086, max= 8576, avg=6079.00, stdev=1675.07, samples=4
00:23:33.960 lat (msec) : 4=0.02%, 10=53.12%, 20=46.79%, 50=0.08%
00:23:33.960 cpu : usr=72.46%, sys=18.61%, ctx=37, majf=0, minf=7
00:23:33.960 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7%
00:23:33.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:33.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:23:33.960 issued rwts: total=12260,12215,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:33.960 latency : target=0, window=0, percentile=100.00%, depth=128
00:23:33.960
00:23:33.960 Run status group 0 (all jobs):
00:23:33.960 READ: bw=23.9MiB/s (25.0MB/s), 23.9MiB/s-23.9MiB/s (25.0MB/s-25.0MB/s), io=47.9MiB (50.2MB), run=2005-2005msec
00:23:33.960 WRITE: bw=23.8MiB/s (25.0MB/s), 23.8MiB/s-23.8MiB/s (25.0MB/s-25.0MB/s), io=47.7MiB (50.0MB), run=2005-2005msec
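Both fio passes in this test are driven the same way: the job file selects ioengine=spdk, the SPDK NVMe fio plugin is preloaded, and the NVMe-oF connection parameters travel inside the filename string rather than a device path. Roughly what the fio_nvme wrapper expanded to for the run above:

  SPDK=/home/vagrant/spdk_repo/spdk
  # preload the SPDK ioengine and point fio at the TCP listener set up earlier
  LD_PRELOAD="$SPDK/build/fio/spdk_nvme" /usr/src/fio/fio \
      "$SPDK/app/fio/nvme/example_config.fio" \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' \
      --bs=4096

The mock_sgl_config.fio pass that follows uses the same mechanism; it differs only in the job file (16 KiB blocks exercising the SGL path) and drops the --bs override.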
00:23:33.960 13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1'
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1'
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
13:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1'
00:23:33.960 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:23:33.960 fio-3.35
00:23:33.960 Starting 1 thread
00:23:36.492
00:23:36.492 test: (groupid=0, jobs=1): err= 0: pid=88307: Fri Dec 6 13:17:42 2024
00:23:36.492 read: IOPS=6718, BW=105MiB/s (110MB/s)(211MiB/2008msec)
00:23:36.492 slat (usec): min=3, max=122, avg= 4.02, stdev= 1.98
00:23:36.492 clat (msec): min=2, max=218, avg=11.27, stdev=15.61
00:23:36.492 lat (msec): min=2, max=218, avg=11.28, stdev=15.61
00:23:36.492 clat percentiles (msec):
00:23:36.492 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8],
00:23:36.492 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11],
00:23:36.492 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 15],
00:23:36.492 | 99.00th=[ 18], 99.50th=[ 215], 99.90th=[ 220], 99.95th=[ 220],
00:23:36.492 | 99.99th=[ 220]
00:23:36.492 bw ( KiB/s): min=32352, max=65376, per=49.90%, avg=53640.00, stdev=15080.96, samples=4
00:23:36.492 iops : min= 2022, max= 4086, avg=3352.50, stdev=942.56, samples=4
00:23:36.492 write: IOPS=3989, BW=62.3MiB/s (65.4MB/s)(110MiB/1768msec); 0 zone resets
00:23:36.492 slat (usec): min=37, max=226, avg=39.84, stdev= 5.27
00:23:36.492 clat (msec): min=6, max=222, avg=13.85, stdev=18.06
00:23:36.492 lat (msec): min=6, max=222, avg=13.89, stdev=18.06
00:23:36.492 clat percentiles (msec):
00:23:36.492 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11],
00:23:36.492 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13],
00:23:36.492 | 70.00th=[ 14], 80.00th=[ 15], 90.00th=[ 17], 95.00th=[ 18],
00:23:36.492 | 99.00th=[ 20], 99.50th=[ 220], 99.90th=[ 222], 99.95th=[ 222],
00:23:36.492 | 99.99th=[ 224]
00:23:36.492 bw ( KiB/s): min=34048, max=68928, per=87.36%, avg=55768.00, stdev=15666.61, samples=4
00:23:36.492 iops : min= 2128, max= 4308, avg=3485.50, stdev=979.16, samples=4
00:23:36.492 lat (msec) : 4=0.18%, 10=38.24%, 20=60.88%, 50=0.08%, 250=0.62%
00:23:36.492 cpu : usr=72.80%, sys=18.63%, ctx=22, majf=0, minf=24
00:23:36.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5%
00:23:36.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:36.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:23:36.492 issued rwts: total=13490,7054,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:36.492 latency : target=0, window=0, percentile=100.00%, depth=128
00:23:36.492
00:23:36.492 Run status group 0 (all jobs):
00:23:36.492 READ: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=211MiB (221MB), run=2008-2008msec
00:23:36.492 WRITE: bw=62.3MiB/s (65.4MB/s), 62.3MiB/s-62.3MiB/s (65.4MB/s-65.4MB/s), io=110MiB
(116MB), run=1768-1768msec 00:23:36.492 13:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:36.492 13:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:36.492 13:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:36.492 13:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:36.492 13:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:36.492 13:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:36.492 13:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:36.492 13:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:36.492 13:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:36.492 13:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:36.492 13:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:36.492 rmmod nvme_tcp 00:23:36.492 rmmod nvme_fabrics 00:23:36.492 rmmod nvme_keyring 00:23:36.751 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 88140 ']' 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 88140 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 88140 ']' 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 88140 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88140 00:23:36.752 killing process with pid 88140 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88140' 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 88140 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 88140 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:36.752 13:17:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:36.752 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:23:37.010 00:23:37.010 real 0m8.779s 00:23:37.010 user 0m35.329s 00:23:37.010 sys 0m2.247s 00:23:37.010 ************************************ 00:23:37.010 END TEST nvmf_fio_host 00:23:37.010 ************************************ 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.010 ************************************ 00:23:37.010 START TEST nvmf_failover 00:23:37.010 ************************************ 00:23:37.010 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:37.270 * Looking for test storage... 00:23:37.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:37.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.270 --rc genhtml_branch_coverage=1 00:23:37.270 --rc genhtml_function_coverage=1 00:23:37.270 --rc genhtml_legend=1 00:23:37.270 --rc geninfo_all_blocks=1 00:23:37.270 --rc geninfo_unexecuted_blocks=1 00:23:37.270 00:23:37.270 ' 00:23:37.270 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:37.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.271 --rc genhtml_branch_coverage=1 00:23:37.271 --rc genhtml_function_coverage=1 00:23:37.271 --rc genhtml_legend=1 00:23:37.271 --rc geninfo_all_blocks=1 00:23:37.271 --rc geninfo_unexecuted_blocks=1 00:23:37.271 00:23:37.271 ' 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:37.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.271 --rc genhtml_branch_coverage=1 00:23:37.271 --rc genhtml_function_coverage=1 00:23:37.271 --rc genhtml_legend=1 00:23:37.271 --rc geninfo_all_blocks=1 00:23:37.271 --rc geninfo_unexecuted_blocks=1 00:23:37.271 00:23:37.271 ' 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:37.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.271 --rc genhtml_branch_coverage=1 00:23:37.271 --rc genhtml_function_coverage=1 00:23:37.271 --rc genhtml_legend=1 00:23:37.271 --rc geninfo_all_blocks=1 00:23:37.271 --rc geninfo_unexecuted_blocks=1 00:23:37.271 00:23:37.271 ' 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.271 
13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:37.271 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
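The lt/cmp_versions trace above (scripts/common.sh@333-368) is SPDK's stock version test: both version strings are split on '.', '-' and ':' and compared component by component, so "lt 1.15 2" succeeds and the pre-2.0 lcov option set (--rc lcov_branch_coverage=1 ...) is selected. A condensed, runnable sketch of that logic, assuming only the '<' operator (the real script handles more operators and validates each component):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:            # split version strings on . - :
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater: "<" fails
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less: "<" holds
        done
        return 1                                              # equal: "<" does not hold
    }
    lt 1.15 2 && echo "lcov is pre-2.0"   # prints: lcov is pre-2.0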
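The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33 running a numeric test, '[' '' -eq 1 ']', against a variable that is empty in this configuration; test's -eq requires integer operands on both sides. The run is unaffected (the failed test simply reads as false and the branch is skipped), but a defensive pattern would default the value first. A hypothetical sketch; SOME_FLAG is a placeholder name, not the script's actual variable:

    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"   # stands in for whatever the real branch does
    fi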
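The nvmf_veth_init trace that follows (nvmf/common.sh@145-219) first probes for leftover interfaces (the "Cannot find device" lines are expected on a clean host), then builds a bridged veth topology between the host and a target network namespace. Condensed from the commands traced below (root required):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge                               # tie both pairs together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # the trace repeats the same steps for the *_if2/*_br2 second pair (10.0.0.2 and 10.0.0.4),
    # brings every link up, and opens TCP/4420 in iptables before ping-testing all four addresses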
00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:37.271 Cannot find device "nvmf_init_br" 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:23:37.271 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:37.272 Cannot find device "nvmf_init_br2" 00:23:37.272 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:23:37.272 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:23:37.272 Cannot find device "nvmf_tgt_br" 00:23:37.272 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:23:37.272 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:37.272 Cannot find device "nvmf_tgt_br2" 00:23:37.272 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:23:37.272 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:37.272 Cannot find device "nvmf_init_br" 00:23:37.272 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:23:37.272 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:37.272 Cannot find device "nvmf_init_br2" 00:23:37.272 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:23:37.272 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:37.272 Cannot find device "nvmf_tgt_br" 00:23:37.272 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:23:37.272 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:37.272 Cannot find device "nvmf_tgt_br2" 00:23:37.272 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:23:37.272 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:37.530 Cannot find device "nvmf_br" 00:23:37.530 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:23:37.530 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:37.530 Cannot find device "nvmf_init_if" 00:23:37.530 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:23:37.530 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:37.530 Cannot find device "nvmf_init_if2" 00:23:37.530 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:23:37.530 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:37.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:37.530 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:23:37.530 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:37.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:37.530 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:37.531 
13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:37.531 13:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:37.531 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:37.531 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:37.531 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:37.531 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:37.531 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:37.789 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:37.789 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:37.789 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:37.789 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:23:37.789 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:37.789 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:37.789 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:23:37.789 00:23:37.789 --- 10.0.0.3 ping statistics --- 00:23:37.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.789 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:23:37.789 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:37.789 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:37.789 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:23:37.789 00:23:37.789 --- 10.0.0.4 ping statistics --- 00:23:37.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.789 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:23:37.789 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:37.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:37.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:23:37.789 00:23:37.789 --- 10.0.0.1 ping statistics --- 00:23:37.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.789 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:23:37.789 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:37.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:23:37.789 00:23:37.789 --- 10.0.0.2 ping statistics --- 00:23:37.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.789 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:23:37.789 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.789 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:23:37.789 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:37.789 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.789 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:37.789 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:37.789 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.790 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:37.790 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:37.790 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:37.790 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:37.790 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:37.790 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:37.790 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=88581 00:23:37.790 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:37.790 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 88581 00:23:37.790 13:17:44 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88581 ']' 00:23:37.790 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.790 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.790 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.790 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.790 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:37.790 [2024-12-06 13:17:44.190246] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:23:37.790 [2024-12-06 13:17:44.190348] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.048 [2024-12-06 13:17:44.343723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:38.048 [2024-12-06 13:17:44.377192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.048 [2024-12-06 13:17:44.377436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.048 [2024-12-06 13:17:44.377586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.048 [2024-12-06 13:17:44.377727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.048 [2024-12-06 13:17:44.377867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
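nvmf_tgt was started above with -m 0xE, and the reactor notices just below confirm the decoding: 0xE is binary 1110, so bits 1-3 are set and reactors land on cores 1, 2 and 3 while core 0 stays free, matching "Total cores available: 3". A one-liner sketch to decode any SPDK core mask:

    mask=0xE                      # the -m argument from the trace above
    for core in 0 1 2 3; do
        (( (mask >> core) & 1 )) && echo "core $core enabled"
    done
    # prints: core 1 enabled / core 2 enabled / core 3 enabled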
00:23:38.048 [2024-12-06 13:17:44.378703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.048 [2024-12-06 13:17:44.378762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:38.048 [2024-12-06 13:17:44.378766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.048 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.048 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:38.048 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:38.048 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:38.048 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:38.048 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.048 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:38.615 [2024-12-06 13:17:44.963231] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.615 13:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:38.873 Malloc0 00:23:38.873 13:17:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:39.437 13:17:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.695 13:17:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:39.953 [2024-12-06 13:17:46.248465] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:39.953 13:17:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:40.211 [2024-12-06 13:17:46.592620] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:40.211 13:17:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:23:40.778 [2024-12-06 13:17:47.073084] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:23:40.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
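The host/failover.sh@22-28 traces above provision the target entirely over JSON-RPC: a TCP transport, a malloc bdev (64 and 512 are MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE from the trace), one subsystem with that namespace, and three listeners for the failover exercise. The same sequence as a standalone sketch, with the transport flags reproduced verbatim from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns $nqn Malloc0
    for port in 4420 4421 4422; do                 # one listener per failover path
        $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s $port
    done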
00:23:40.778 13:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=88691
00:23:40.778 13:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:23:40.778 13:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:40.778 13:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 88691 /var/tmp/bdevperf.sock
00:23:40.778 13:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88691 ']'
00:23:40.778 13:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:40.778 13:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:40.778 13:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:40.778 13:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:40.778 13:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:41.035 13:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:41.035 13:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:23:41.035 13:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:41.292 NVMe0n1
00:23:41.292 13:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:41.856
00:23:41.856 13:17:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=88723
00:23:41.856 13:17:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:41.856 13:17:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:23:42.788 13:17:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:23:43.046 [2024-12-06 13:17:49.519255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b85a90 is same with the state(6) to be set
[last message repeated verbatim through 13:17:49.519956; duplicate lines collapsed]
00:23:43.047 13:17:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:23:46.324 13:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:46.583
00:23:46.583 13:17:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:23:47.149 [2024-12-06 13:17:53.544444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b86540 is same with the state(6) to be set
[last message repeated 6 more times through 13:17:53.544551; duplicate lines collapsed]
00:23:47.149 13:17:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:50.429 13:17:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:23:50.687 [2024-12-06 13:17:56.930429] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:23:50.687 13:17:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
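Taken together, the remove/attach/add calls above and below are the heart of the failover exercise: bdevperf attaches two -x failover paths (4420 and 4421) up front and a third (4422) mid-test, and the script cycles listeners so only one path dies at a time; each nvmf_subsystem_remove_listener is followed by a burst of the tcp.c:1790 recv-state errors as the dropped connection is torn down. The choreography, condensed from the trace:

    nqn=nqn.2016-06.io.spdk:cnode1
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4420   # @43 drop path 1
    sleep 3                                                               # @45 I/O fails over to 4421
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n $nqn -x failover            # @47 add path 3
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4421   # @48 drop path 2
    sleep 3                                                               # @50
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420      # @53 restore path 1
    sleep 1                                                               # @55
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4422   # @57 drop path 3 (below)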
00:23:51.622 13:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:23:51.882 [2024-12-06 13:17:58.254874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4cf50 is same with the state(6) to be set
[last message repeated verbatim through 13:17:58.256027; duplicate lines collapsed]
00:23:51.883 13:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 88723
00:23:57.183 {
00:23:57.183 "results": [
00:23:57.183 {
00:23:57.183 "job": "NVMe0n1",
00:23:57.183 "core_mask": "0x1",
00:23:57.183 "workload": "verify",
00:23:57.183 "status": "finished",
00:23:57.183 "verify_range": {
00:23:57.183 "start": 0,
00:23:57.183 "length": 16384
00:23:57.183 },
00:23:57.183 "queue_depth": 128,
00:23:57.183 "io_size": 4096,
"runtime": 15.005023, 00:23:57.183 "iops": 7741.8741710692475, 00:23:57.183 "mibps": 30.241695980739248, 00:23:57.183 "io_failed": 3317, 00:23:57.183 "io_timeout": 0, 00:23:57.183 "avg_latency_us": 16038.722153259012, 00:23:57.183 "min_latency_us": 852.7127272727273, 00:23:57.183 "max_latency_us": 24427.054545454546 00:23:57.183 } 00:23:57.183 ], 00:23:57.183 "core_count": 1 00:23:57.183 } 00:23:57.183 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 88691 00:23:57.183 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88691 ']' 00:23:57.183 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88691 00:23:57.183 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:57.183 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:57.183 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88691 00:23:57.183 killing process with pid 88691 00:23:57.183 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:57.183 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:57.183 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88691' 00:23:57.183 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88691 00:23:57.183 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88691 00:23:57.183 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:57.183 [2024-12-06 13:17:47.164342] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:23:57.183 [2024-12-06 13:17:47.164497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88691 ] 00:23:57.183 [2024-12-06 13:17:47.314789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.183 [2024-12-06 13:17:47.347646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.183 Running I/O for 15 seconds... 
00:23:57.183 7817.00 IOPS, 30.54 MiB/s [2024-12-06T13:18:03.711Z] [2024-12-06 13:17:49.520196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.520245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.520276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.520292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.520310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.520324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.520341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.520366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.520397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.520427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.520458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.520488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.520520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.520549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.520580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.520605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.520636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.520666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.520697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.520727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.520756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.520785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.520855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.520873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.520890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.520904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.520920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.520933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.520969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.520985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.521001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.521025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.521041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.521060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.521077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.521091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.521108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.521122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 13:17:49.521137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.521151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.183 [2024-12-06 
13:17:49.521167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.183 [2024-12-06 13:17:49.521181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521792] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.521975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.521991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:74504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.522005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.522021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.522035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.522052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.522080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.522098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.522113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.522129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 
lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.522143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.522159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.522173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.522189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.522203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.522220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.522233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.522249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.522263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.522280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.522294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.522310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.522324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.522340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.522354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.522370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.522384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.184 [2024-12-06 13:17:49.522401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.184 [2024-12-06 13:17:49.522415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.522431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.522445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.522468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.522483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.522499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.522513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.522529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.522543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.522559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.522576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.522593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.522607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.522623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.522638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.522655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.522669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.522685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.522698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.522715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.522729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.522745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 
13:17:49.522759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.522775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.522789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.522805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.522819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.522835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.522849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.522872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.522887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.522903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.522917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.522933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.522960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.522978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.522992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:74920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.185 [2024-12-06 13:17:49.523675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.185 [2024-12-06 13:17:49.523697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.523712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.523728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.523742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.523758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.523772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.523788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.523802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.523818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.523832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.523848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.523862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.523878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.523891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.523907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.523921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.523937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.523964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.523982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.523996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.524027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.524057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:57.186 [2024-12-06 13:17:49.524073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.524094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.524128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.524158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.524188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.524218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.524248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.524278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.524309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.524339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.524369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524385] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.524399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.524429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.524459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.524496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.524526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.186 [2024-12-06 13:17:49.524556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x942b80 is same with the state(6) to be set 00:23:57.186 [2024-12-06 13:17:49.524591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.186 [2024-12-06 13:17:49.524608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.186 [2024-12-06 13:17:49.524621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75152 len:8 PRP1 0x0 PRP2 0x0 00:23:57.186 [2024-12-06 13:17:49.524635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524695] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:23:57.186 [2024-12-06 13:17:49.524761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.186 [2024-12-06 13:17:49.524785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.186 [2024-12-06 
13:17:49.524814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.186 [2024-12-06 13:17:49.524842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.186 [2024-12-06 13:17:49.524871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:49.524885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:57.186 [2024-12-06 13:17:49.524968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d6f50 (9): Bad file descriptor 00:23:57.186 [2024-12-06 13:17:49.529048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:57.186 [2024-12-06 13:17:49.556380] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:23:57.186 7096.50 IOPS, 27.72 MiB/s [2024-12-06T13:18:03.714Z] 6598.67 IOPS, 25.78 MiB/s [2024-12-06T13:18:03.714Z] 7068.00 IOPS, 27.61 MiB/s [2024-12-06T13:18:03.714Z] 7265.20 IOPS, 28.38 MiB/s [2024-12-06T13:18:03.714Z] [2024-12-06 13:17:53.545376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.186 [2024-12-06 13:17:53.545447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:53.545538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.186 [2024-12-06 13:17:53.545575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:53.545610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.186 [2024-12-06 13:17:53.545640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.186 [2024-12-06 13:17:53.545673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.187 [2024-12-06 13:17:53.545701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.187 [2024-12-06 13:17:53.545733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.187 [2024-12-06 13:17:53.545761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.187 [2024-12-06 13:17:53.545792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53240 len:8 SGL 
[log condensed: 2024-12-06 13:17:53.545376 to 13:17:53.552882, nvme_qpair.c prints each I/O still queued on qid:1 as it is aborted, one NOTICE pair per command (243:nvme_io_qpair_print_command, then 474:spdk_nvme_print_completion with ABORTED - SQ DELETION (00/08) qid:1): 64 WRITE commands covering lba 53200-53704 and 63 READ commands covering lba 52688-53184, all len:8]
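Tallying the sweep: 64 writes plus 63 reads makes 127 commands aborted in-line, and the one remaining request (READ cid:0, lba:53192) is completed manually just below, for 128 I/Os in flight at teardown, consistent with the nvmf target's default maximum queue depth of 128. That last request prints PRP1/PRP2 rather than an SGL, presumably because it had not yet been handed to the TCP transport when the queue went away.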
00:23:57.190 [2024-12-06 13:17:53.552970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-12-06 13:17:53.553012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-06 13:17:53.553035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53192 len:8 PRP1 0x0 PRP2 0x0
[2024-12-06 13:17:53.553059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-06 13:17:53.553143] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422
[2024-12-06 13:17:53.553265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-06 13:17:53.553301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-06 13:17:53.553327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-06 13:17:53.553354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-06 13:17:53.553381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-06 13:17:53.553405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-06 13:17:53.553430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-06 13:17:53.553455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-06 13:17:53.553479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
[2024-12-06 13:17:53.553575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d6f50 (9): Bad file descriptor
[2024-12-06 13:17:53.558201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
[2024-12-06 13:17:53.591388] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
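Here bdev_nvme gives up on 10.0.0.3:4421 and rolls the I/O path over to the alternate listener at 10.0.0.3:4422 that was registered for the same controller (typically by attaching it once per listener through the bdev_nvme_attach_controller RPC; the exact wiring is the test script's). The underlying mechanism is also available in the public driver API; a minimal hand-rolled sketch, not the bdev_nvme implementation, with illustrative names:

#include "spdk/nvme.h"

/* Failover sketch: once a controller is in the failed/disconnected state,
 * repoint it at the next stored path and reset. spdk_nvme_ctrlr_set_trid()
 * and spdk_nvme_ctrlr_reset() are public API from include/spdk/nvme.h;
 * next_trid (e.g. TCP / 10.0.0.3 / 4422) is filled in by the caller. */
static int
failover_and_reset(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_transport_id *next_trid)
{
	int rc;

	rc = spdk_nvme_ctrlr_set_trid(ctrlr, next_trid);
	if (rc != 0) {
		return rc;
	}
	return spdk_nvme_ctrlr_reset(ctrlr);
}

Note that spdk_nvme_ctrlr_set_trid() is meant to be called on a controller that has already failed, which matches the sequence in the log: fail, switch trid, then the "resetting controller" reconnect.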
00:23:57.190 7395.67 IOPS, 28.89 MiB/s [2024-12-06T13:18:03.718Z]
7581.43 IOPS, 29.61 MiB/s [2024-12-06T13:18:03.718Z]
7709.75 IOPS, 30.12 MiB/s [2024-12-06T13:18:03.718Z]
7731.22 IOPS, 30.20 MiB/s [2024-12-06T13:18:03.718Z]
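A quick consistency check on these periodic throughput samples from the I/O load the test keeps running: every data command in the surrounding dumps is len:8 logical blocks, which at an assumed 512-byte block size is 4 KiB per I/O, so MiB/s = IOPS * 4096 / 2^20. That reproduces every sample here, e.g. 7395.67 * 4096 / 1048576 = 28.89 MiB/s, exactly the figure printed; the dips and recoveries track the reset/failover cycles above.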
[log condensed: 2024-12-06 13:17:58.256282 to 13:17:58.257921, another round of qpair teardown on qid:1, one NOTICE pair per command as above: 49 READ commands covering lba 105192-105576, all len:8, each aborted with ABORTED - SQ DELETION (00/08); the final pair is kept below]
[2024-12-06 13:17:58.257988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.191 [2024-12-06 13:17:58.258002] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.191 [2024-12-06 13:17:58.258019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.191 [2024-12-06 13:17:58.258033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.191 [2024-12-06 13:17:58.258049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.191 [2024-12-06 13:17:58.258063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.191 [2024-12-06 13:17:58.258079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:105608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.191 [2024-12-06 13:17:58.258093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.191 [2024-12-06 13:17:58.258113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.191 [2024-12-06 13:17:58.258138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.191 [2024-12-06 13:17:58.258156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.191 [2024-12-06 13:17:58.258170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.191 [2024-12-06 13:17:58.258213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.191 [2024-12-06 13:17:58.258230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.191 [2024-12-06 13:17:58.258246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.191 [2024-12-06 13:17:58.258260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.191 [2024-12-06 13:17:58.258276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.191 [2024-12-06 13:17:58.258290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.192 [2024-12-06 13:17:58.258328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.192 [2024-12-06 13:17:58.258361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.192 [2024-12-06 13:17:58.258392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.192 [2024-12-06 13:17:58.258423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.192 [2024-12-06 13:17:58.258469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.192 [2024-12-06 13:17:58.258500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.258533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:105816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.258564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.258593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.258632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.258663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:105848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.258694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.258725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.258754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.258785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.258814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.258844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.258874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.258904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.258934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.258967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.258983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.259000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.259014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 
13:17:58.259037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:105936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.259055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.259071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.259085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.259101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.259115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.259153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.259175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.259192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.259206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.259222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:105976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.259236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.259252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.259266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.259282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:105992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.259296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.259312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.259326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.259342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:106008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.259356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.259372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.259394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.259415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:106024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.192 [2024-12-06 13:17:58.259430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.192 [2024-12-06 13:17:58.259445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:106032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.259469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.259486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:106040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.259508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.259527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.259542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.259558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:106056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.259572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.259587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.259602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.259618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.259632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.259648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:106080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.259668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.259685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.259699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.259715] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.259729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.259745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:106104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.259759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.259775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.259789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.259804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.259819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.259835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.259848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.259864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:106136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.259885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.259902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:106144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.259916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.259932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.259960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.259979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:106160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.259993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:106168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.260023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 
lba:106176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.260053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:106184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.260083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:106192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.260121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.260156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:106208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.193 [2024-12-06 13:17:58.260190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.193 [2024-12-06 13:17:58.260221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.193 [2024-12-06 13:17:58.260252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.193 [2024-12-06 13:17:58.260282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.193 [2024-12-06 13:17:58.260321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.193 [2024-12-06 13:17:58.260351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:57.193 [2024-12-06 13:17:58.260399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.193 [2024-12-06 13:17:58.260442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.193 [2024-12-06 13:17:58.260473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.193 [2024-12-06 13:17:58.260503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.193 [2024-12-06 13:17:58.260536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.193 [2024-12-06 13:17:58.260570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.193 [2024-12-06 13:17:58.260600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.193 [2024-12-06 13:17:58.260657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.193 [2024-12-06 13:17:58.260669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105800 len:8 PRP1 0x0 PRP2 0x0 00:23:57.193 [2024-12-06 13:17:58.260683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260741] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:23:57.193 [2024-12-06 13:17:58.260816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.193 [2024-12-06 13:17:58.260850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:57.193 [2024-12-06 13:17:58.260922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.193 [2024-12-06 13:17:58.260968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.193 [2024-12-06 13:17:58.260995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.194 [2024-12-06 13:17:58.261020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.194 [2024-12-06 13:17:58.261045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.194 [2024-12-06 13:17:58.261065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:57.194 [2024-12-06 13:17:58.261110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d6f50 (9): Bad file descriptor 00:23:57.194 [2024-12-06 13:17:58.265210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:57.194 [2024-12-06 13:17:58.294507] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:23:57.194 7670.20 IOPS, 29.96 MiB/s [2024-12-06T13:18:03.722Z] 7706.82 IOPS, 30.10 MiB/s [2024-12-06T13:18:03.722Z] 7697.08 IOPS, 30.07 MiB/s [2024-12-06T13:18:03.722Z] 7668.54 IOPS, 29.96 MiB/s [2024-12-06T13:18:03.722Z] 7698.14 IOPS, 30.07 MiB/s 00:23:57.194 Latency(us) 00:23:57.194 [2024-12-06T13:18:03.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.194 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:57.194 Verification LBA range: start 0x0 length 0x4000 00:23:57.194 NVMe0n1 : 15.01 7741.87 30.24 221.06 0.00 16038.72 852.71 24427.05 00:23:57.194 [2024-12-06T13:18:03.722Z] =================================================================================================================== 00:23:57.194 [2024-12-06T13:18:03.722Z] Total : 7741.87 30.24 221.06 0.00 16038.72 852.71 24427.05 00:23:57.194 Received shutdown signal, test time was about 15.000000 seconds 00:23:57.194 00:23:57.194 Latency(us) 00:23:57.194 [2024-12-06T13:18:03.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.194 [2024-12-06T13:18:03.722Z] =================================================================================================================== 00:23:57.194 [2024-12-06T13:18:03.722Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:57.194 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:57.194 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:57.194 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:57.194 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88923 00:23:57.194 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88923 /var/tmp/bdevperf.sock 00:23:57.194 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 
-o 4096 -w verify -t 1 -f 00:23:57.194 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88923 ']' 00:23:57.194 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.194 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.194 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.194 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.194 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:57.452 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.452 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:57.452 13:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:58.018 [2024-12-06 13:18:04.243584] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:58.018 13:18:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:23:58.276 [2024-12-06 13:18:04.624034] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:23:58.276 13:18:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:58.534 NVMe0n1 00:23:58.534 13:18:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:59.097 00:23:59.097 13:18:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:59.355 00:23:59.355 13:18:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:59.355 13:18:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:59.612 13:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:00.178 13:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:03.575 13:18:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:03.575 13:18:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:03.575 13:18:09 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:03.575 13:18:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=89052 00:24:03.575 13:18:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 89052 00:24:04.507 { 00:24:04.507 "results": [ 00:24:04.507 { 00:24:04.507 "job": "NVMe0n1", 00:24:04.507 "core_mask": "0x1", 00:24:04.507 "workload": "verify", 00:24:04.507 "status": "finished", 00:24:04.507 "verify_range": { 00:24:04.507 "start": 0, 00:24:04.507 "length": 16384 00:24:04.507 }, 00:24:04.507 "queue_depth": 128, 00:24:04.507 "io_size": 4096, 00:24:04.507 "runtime": 1.011323, 00:24:04.507 "iops": 8771.678286759028, 00:24:04.507 "mibps": 34.264368307652454, 00:24:04.507 "io_failed": 0, 00:24:04.507 "io_timeout": 0, 00:24:04.507 "avg_latency_us": 14498.779282032363, 00:24:04.507 "min_latency_us": 1846.9236363636364, 00:24:04.507 "max_latency_us": 16205.265454545455 00:24:04.507 } 00:24:04.507 ], 00:24:04.507 "core_count": 1 00:24:04.507 } 00:24:04.507 13:18:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:04.507 [2024-12-06 13:18:03.626349] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:24:04.507 [2024-12-06 13:18:03.626452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88923 ] 00:24:04.507 [2024-12-06 13:18:03.774215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.507 [2024-12-06 13:18:03.823968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.507 [2024-12-06 13:18:06.384545] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:24:04.507 [2024-12-06 13:18:06.384698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.507 [2024-12-06 13:18:06.384737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.507 [2024-12-06 13:18:06.384770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.507 [2024-12-06 13:18:06.384795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.507 [2024-12-06 13:18:06.384820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.507 [2024-12-06 13:18:06.384843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.507 [2024-12-06 13:18:06.384867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.507 [2024-12-06 13:18:06.384892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.507 [2024-12-06 13:18:06.384918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:24:04.507 [2024-12-06 13:18:06.385014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:04.507 [2024-12-06 13:18:06.385068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d9f50 (9): Bad file descriptor 00:24:04.507 [2024-12-06 13:18:06.388645] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:04.507 Running I/O for 1 seconds... 00:24:04.507 8695.00 IOPS, 33.96 MiB/s 00:24:04.507 Latency(us) 00:24:04.507 [2024-12-06T13:18:11.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.507 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:04.507 Verification LBA range: start 0x0 length 0x4000 00:24:04.507 NVMe0n1 : 1.01 8771.68 34.26 0.00 0.00 14498.78 1846.92 16205.27 00:24:04.507 [2024-12-06T13:18:11.035Z] =================================================================================================================== 00:24:04.507 [2024-12-06T13:18:11.035Z] Total : 8771.68 34.26 0.00 0.00 14498.78 1846.92 16205.27 00:24:04.507 13:18:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:04.507 13:18:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:04.765 13:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:05.331 13:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:05.331 13:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:05.590 13:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:05.848 13:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:09.130 13:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:09.130 13:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:09.130 13:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 88923 00:24:09.130 13:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88923 ']' 00:24:09.130 13:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88923 00:24:09.130 13:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:09.130 13:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.130 13:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88923 00:24:09.130 killing process with pid 88923 00:24:09.130 13:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:09.130 13:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:24:09.130 13:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88923' 00:24:09.130 13:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88923 00:24:09.130 13:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88923 00:24:09.388 13:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:09.388 13:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:09.646 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:09.646 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:09.646 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:09.646 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:09.646 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:09.646 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:09.646 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:09.646 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:09.646 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:09.646 rmmod nvme_tcp 00:24:09.903 rmmod nvme_fabrics 00:24:09.903 rmmod nvme_keyring 00:24:09.903 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:09.903 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:09.903 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:09.903 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 88581 ']' 00:24:09.903 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 88581 00:24:09.903 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88581 ']' 00:24:09.903 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88581 00:24:09.903 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:09.903 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.904 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88581 00:24:09.904 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:09.904 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:09.904 killing process with pid 88581 00:24:09.904 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88581' 00:24:09.904 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88581 00:24:09.904 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88581 00:24:09.904 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:09.904 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:09.904 13:18:16 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:09.904 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:09.904 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:09.904 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:09.904 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:09.904 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:09.904 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:09.904 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:09.904 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:09.904 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:24:10.161 00:24:10.161 real 0m33.154s 00:24:10.161 user 2m9.696s 00:24:10.161 sys 0m4.694s 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:10.161 ************************************ 00:24:10.161 END TEST nvmf_failover 00:24:10.161 ************************************ 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:10.161 13:18:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:10.162 13:18:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:24:10.162 13:18:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.422 ************************************ 00:24:10.422 START TEST nvmf_host_discovery 00:24:10.422 ************************************ 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:10.422 * Looking for test storage... 00:24:10.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:10.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.422 --rc genhtml_branch_coverage=1 00:24:10.422 --rc genhtml_function_coverage=1 00:24:10.422 --rc genhtml_legend=1 00:24:10.422 --rc geninfo_all_blocks=1 00:24:10.422 --rc geninfo_unexecuted_blocks=1 00:24:10.422 00:24:10.422 ' 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:10.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.422 --rc genhtml_branch_coverage=1 00:24:10.422 --rc genhtml_function_coverage=1 00:24:10.422 --rc genhtml_legend=1 00:24:10.422 --rc geninfo_all_blocks=1 00:24:10.422 --rc geninfo_unexecuted_blocks=1 00:24:10.422 00:24:10.422 ' 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:10.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.422 --rc genhtml_branch_coverage=1 00:24:10.422 --rc genhtml_function_coverage=1 00:24:10.422 --rc genhtml_legend=1 00:24:10.422 --rc geninfo_all_blocks=1 00:24:10.422 --rc geninfo_unexecuted_blocks=1 00:24:10.422 00:24:10.422 ' 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:10.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.422 --rc genhtml_branch_coverage=1 00:24:10.422 --rc genhtml_function_coverage=1 00:24:10.422 --rc genhtml_legend=1 00:24:10.422 --rc geninfo_all_blocks=1 00:24:10.422 --rc geninfo_unexecuted_blocks=1 00:24:10.422 00:24:10.422 ' 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.422 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:10.423 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
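The nvmf_veth_init variables above name a two-initiator/two-target veth topology that the trace builds next: the *_if ends of the target pairs are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/10.0.0.4), the initiator ends stay in the root namespace (10.0.0.1/10.0.0.2), and every *_br peer is enslaved to the nvmf_br bridge. Condensed from the commands traced below (interface names and addresses as in the trace; cleanup and error handling omitted), the setup is roughly equivalent to this sketch:

    # Create the namespace and the four veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # Move the target ends into the namespace and address everything
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # Bring the links up on both sides of the namespace boundary
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the *_br peers together and open TCP/4420 for the initiators
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Verify: the root namespace reaches the namespaced target addresses
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4

With this in place the nvmf target can listen on 10.0.0.3 inside the namespace while the host stack connects from the root namespace over the bridge, which is exactly what the discovery test below exercises.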
00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:10.423 Cannot find device "nvmf_init_br" 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:10.423 Cannot find device "nvmf_init_br2" 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:10.423 Cannot find device "nvmf_tgt_br" 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:10.423 Cannot find device "nvmf_tgt_br2" 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:24:10.423 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:10.717 Cannot find device "nvmf_init_br" 00:24:10.717 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:24:10.717 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:10.717 Cannot find device "nvmf_init_br2" 00:24:10.717 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:24:10.717 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:10.717 Cannot find device "nvmf_tgt_br" 00:24:10.717 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:24:10.717 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:10.717 Cannot find device "nvmf_tgt_br2" 00:24:10.717 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:24:10.717 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:10.717 Cannot find device "nvmf_br" 00:24:10.717 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:24:10.717 13:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:10.717 Cannot find device "nvmf_init_if" 00:24:10.717 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:24:10.717 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:10.717 Cannot find device "nvmf_init_if2" 00:24:10.717 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:24:10.717 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:10.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:24:10.717 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:24:10.717 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:10.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:10.717 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:24:10.717 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:10.717 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:10.717 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:10.717 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:10.717 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:10.717 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:10.717 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:10.717 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:10.717 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:10.717 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:10.717 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:10.718 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:10.718 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:10.718 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:10.718 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:10.718 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:10.718 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:10.718 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:10.718 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:10.718 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:10.718 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:10.718 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:10.718 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:10.718 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:10.718 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:10.718 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:10.976 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:10.976 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:10.976 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:10.976 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:10.976 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:10.976 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:10.976 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:10.976 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:10.976 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:24:10.976 00:24:10.976 --- 10.0.0.3 ping statistics --- 00:24:10.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.976 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:24:10.976 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:10.976 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:10.976 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.099 ms 00:24:10.976 00:24:10.976 --- 10.0.0.4 ping statistics --- 00:24:10.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.976 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:24:10.976 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:10.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:24:10.976 00:24:10.976 --- 10.0.0.1 ping statistics --- 00:24:10.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.976 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:10.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:10.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:24:10.977 00:24:10.977 --- 10.0.0.2 ping statistics --- 00:24:10.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.977 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=89412 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 89412 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 89412 ']' 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.977 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.977 [2024-12-06 13:18:17.368599] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:24:10.977 [2024-12-06 13:18:17.368689] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.236 [2024-12-06 13:18:17.519157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.236 [2024-12-06 13:18:17.556472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.236 [2024-12-06 13:18:17.556527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.236 [2024-12-06 13:18:17.556541] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.236 [2024-12-06 13:18:17.556551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.236 [2024-12-06 13:18:17.556560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.236 [2024-12-06 13:18:17.556913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.236 [2024-12-06 13:18:17.686721] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.236 [2024-12-06 13:18:17.698874] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.236 null0 00:24:11.236 13:18:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.236 null1 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=89447 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 89447 /tmp/host.sock 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 89447 ']' 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:11.236 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.236 13:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:11.495 [2024-12-06 13:18:17.790698] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:24:11.495 [2024-12-06 13:18:17.790796] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89447 ] 00:24:11.495 [2024-12-06 13:18:17.939306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.495 [2024-12-06 13:18:17.987432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.753 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:11.753 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:11.753 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:11.754 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.013 [2024-12-06 13:18:18.467026] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:12.013 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:12.014 13:18:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:12.014 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.014 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:12.014 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.014 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:12.014 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.014 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.272 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:12.272 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:12.272 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:12.272 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:12.272 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:12.272 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:12.272 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:24:12.273 13:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:12.841 [2024-12-06 13:18:19.102410] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:24:12.841 [2024-12-06 13:18:19.102459] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:24:12.841 [2024-12-06 13:18:19.102502] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:12.841 [2024-12-06 13:18:19.188562] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:24:12.841 [2024-12-06 13:18:19.243122] 
bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:24:12.841 [2024-12-06 13:18:19.244112] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6aee50:1 started. 00:24:12.841 [2024-12-06 13:18:19.246072] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:24:12.841 [2024-12-06 13:18:19.246102] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:24:12.841 [2024-12-06 13:18:19.251073] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6aee50 was disconnected and freed. delete nvme_qpair. 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:13.408 13:18:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:13.408 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:13.409 [2024-12-06 13:18:19.924785] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6af400:1 started. 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:13.409 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:13.409 [2024-12-06 13:18:19.931428] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6af400 was disconnected and freed. delete nvme_qpair. 
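The repeated waitforcondition/get_notification_count expansions above all come from one small polling helper in common/autotest_common.sh. Reconstructed from the trace (local max=10, eval of the condition string, sleep 1 between attempts), it is roughly the following; the failure path after the retries are exhausted is an assumption, since the trace only shows successful waits:

    # Rough reconstruction from the traced expansions, not the verbatim source.
    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            # The condition is a shell expression, evaluated fresh each attempt
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1  # assumed: give up after ~10s if the condition never holds
    }
    # Typical calls seen in the trace:
    #   waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    #   waitforcondition 'get_notification_count && ((notification_count == expected_count))'

Consistent with the values logged above (notify_id stepping 0 -> 1 -> 2 as each namespace-add event arrives), get_notification_count counts only events newer than the last seen notify_id, roughly:

    #   notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    #   notify_id=$((notify_id + notification_count))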
00:24:13.668 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.668 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:13.668 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:13.668 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:13.668 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:13.668 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:13.668 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:13.668 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:13.668 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:13.668 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:13.668 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:13.668 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:13.668 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:13.668 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.668 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.668 13:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.668 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:13.668 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:13.668 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:13.668 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:13.668 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:24:13.668 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.668 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.668 [2024-12-06 13:18:20.027786] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:13.668 [2024-12-06 13:18:20.028502] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:13.669 [2024-12-06 13:18:20.028543] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:13.669 [2024-12-06 13:18:20.115034] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:13.669 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.669 [2024-12-06 13:18:20.175519] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:24:13.669 [2024-12-06 13:18:20.175586] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:24:13.669 [2024-12-06 13:18:20.175599] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:24:13.669 [2024-12-06 13:18:20.175605] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:24:13.928 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:13.928 13:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:14.864 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:14.865 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.865 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.865 [2024-12-06 13:18:21.310180] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:14.865 [2024-12-06 13:18:21.310360] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:14.865 [2024-12-06 13:18:21.313339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.865 [2024-12-06 13:18:21.313382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-12-06 13:18:21.313398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.865 [2024-12-06 13:18:21.313408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-12-06 13:18:21.313418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.865 [2024-12-06 13:18:21.313427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-12-06 13:18:21.313438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.865 [2024-12-06 13:18:21.313447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-12-06 13:18:21.313456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68c0b0 is same with the state(6) to be set 00:24:14.865 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.865 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:14.865 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:14.865 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:24:14.865 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:14.865 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:14.865 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:14.865 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:14.865 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:14.865 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:14.865 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:14.865 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.865 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.865 [2024-12-06 13:18:21.323294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68c0b0 (9): Bad file descriptor 00:24:14.865 [2024-12-06 13:18:21.333311] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:14.865 [2024-12-06 13:18:21.333337] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:14.865 [2024-12-06 13:18:21.333344] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:14.865 [2024-12-06 13:18:21.333350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:14.865 [2024-12-06 13:18:21.333388] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:14.865 [2024-12-06 13:18:21.333501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.865 [2024-12-06 13:18:21.333527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68c0b0 with addr=10.0.0.3, port=4420 00:24:14.865 [2024-12-06 13:18:21.333539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68c0b0 is same with the state(6) to be set 00:24:14.865 [2024-12-06 13:18:21.333558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68c0b0 (9): Bad file descriptor 00:24:14.865 [2024-12-06 13:18:21.333574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:14.865 [2024-12-06 13:18:21.333584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:14.865 [2024-12-06 13:18:21.333595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:14.865 [2024-12-06 13:18:21.333604] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:14.865 [2024-12-06 13:18:21.333611] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:14.865 [2024-12-06 13:18:21.333616] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:14.865 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.865 [2024-12-06 13:18:21.343410] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:14.865 [2024-12-06 13:18:21.343465] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:14.865 [2024-12-06 13:18:21.343474] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:14.865 [2024-12-06 13:18:21.343480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:14.865 [2024-12-06 13:18:21.343529] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:14.865 [2024-12-06 13:18:21.343645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.865 [2024-12-06 13:18:21.343671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68c0b0 with addr=10.0.0.3, port=4420 00:24:14.865 [2024-12-06 13:18:21.343685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68c0b0 is same with the state(6) to be set 00:24:14.865 [2024-12-06 13:18:21.343704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68c0b0 (9): Bad file descriptor 00:24:14.865 [2024-12-06 13:18:21.343721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:14.865 [2024-12-06 13:18:21.343730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:14.865 [2024-12-06 13:18:21.343741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:14.865 [2024-12-06 13:18:21.343751] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:14.865 [2024-12-06 13:18:21.343758] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:14.865 [2024-12-06 13:18:21.343766] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:14.865 [2024-12-06 13:18:21.353554] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:14.865 [2024-12-06 13:18:21.353601] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:14.865 [2024-12-06 13:18:21.353610] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:14.865 [2024-12-06 13:18:21.353619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:14.865 [2024-12-06 13:18:21.353670] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:14.865 [2024-12-06 13:18:21.353782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.865 [2024-12-06 13:18:21.353813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68c0b0 with addr=10.0.0.3, port=4420 00:24:14.865 [2024-12-06 13:18:21.353829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68c0b0 is same with the state(6) to be set 00:24:14.865 [2024-12-06 13:18:21.353853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68c0b0 (9): Bad file descriptor 00:24:14.865 [2024-12-06 13:18:21.353873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:14.865 [2024-12-06 13:18:21.353885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:14.865 [2024-12-06 13:18:21.353900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:14.865 [2024-12-06 13:18:21.353913] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:14.865 [2024-12-06 13:18:21.353925] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:14.865 [2024-12-06 13:18:21.353934] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:14.865 [2024-12-06 13:18:21.363690] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:14.865 [2024-12-06 13:18:21.363735] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:14.865 [2024-12-06 13:18:21.363743] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:14.865 [2024-12-06 13:18:21.363749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:14.865 [2024-12-06 13:18:21.363789] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:14.865 [2024-12-06 13:18:21.363877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.865 [2024-12-06 13:18:21.363901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68c0b0 with addr=10.0.0.3, port=4420 00:24:14.865 [2024-12-06 13:18:21.363914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68c0b0 is same with the state(6) to be set 00:24:14.865 [2024-12-06 13:18:21.363934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68c0b0 (9): Bad file descriptor 00:24:14.865 [2024-12-06 13:18:21.363968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:14.865 [2024-12-06 13:18:21.363979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:14.865 [2024-12-06 13:18:21.363990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:14.865 [2024-12-06 13:18:21.363999] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:14.865 [2024-12-06 13:18:21.364005] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:14.865 [2024-12-06 13:18:21.364010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:14.865 [2024-12-06 13:18:21.373802] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:14.866 [2024-12-06 13:18:21.373839] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:14.866 [2024-12-06 13:18:21.373846] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:14.866 [2024-12-06 13:18:21.373852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:14.866 [2024-12-06 13:18:21.373888] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:14.866 [2024-12-06 13:18:21.373988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.866 [2024-12-06 13:18:21.374012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68c0b0 with addr=10.0.0.3, port=4420 00:24:14.866 [2024-12-06 13:18:21.374025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68c0b0 is same with the state(6) to be set 00:24:14.866 [2024-12-06 13:18:21.374043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68c0b0 (9): Bad file descriptor 00:24:14.866 [2024-12-06 13:18:21.374058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:14.866 [2024-12-06 13:18:21.374067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:14.866 [2024-12-06 13:18:21.374077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:14.866 [2024-12-06 13:18:21.374086] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:14.866 [2024-12-06 13:18:21.374093] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:14.866 [2024-12-06 13:18:21.374098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:14.866 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.866 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:14.866 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:14.866 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:14.866 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:14.866 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:14.866 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:14.866 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:14.866 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.866 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.866 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.866 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:14.866 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:14.866 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:14.866 [2024-12-06 13:18:21.383912] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:14.866 [2024-12-06 13:18:21.383964] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:14.866 [2024-12-06 13:18:21.383973] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:14.866 [2024-12-06 13:18:21.383980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:14.866 [2024-12-06 13:18:21.384022] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:14.866 [2024-12-06 13:18:21.384125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.866 [2024-12-06 13:18:21.384151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68c0b0 with addr=10.0.0.3, port=4420 00:24:14.866 [2024-12-06 13:18:21.384165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68c0b0 is same with the state(6) to be set 00:24:14.866 [2024-12-06 13:18:21.384185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68c0b0 (9): Bad file descriptor 00:24:14.866 [2024-12-06 13:18:21.384201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:14.866 [2024-12-06 13:18:21.384211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:14.866 [2024-12-06 13:18:21.384222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:14.866 [2024-12-06 13:18:21.384231] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:14.866 [2024-12-06 13:18:21.384238] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:14.866 [2024-12-06 13:18:21.384243] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:15.126 [2024-12-06 13:18:21.394033] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:15.126 [2024-12-06 13:18:21.394065] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:15.126 [2024-12-06 13:18:21.394072] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:15.126 [2024-12-06 13:18:21.394079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:15.126 [2024-12-06 13:18:21.394109] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:15.126 [2024-12-06 13:18:21.394174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.126 [2024-12-06 13:18:21.394196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68c0b0 with addr=10.0.0.3, port=4420 00:24:15.126 [2024-12-06 13:18:21.394208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68c0b0 is same with the state(6) to be set 00:24:15.126 [2024-12-06 13:18:21.394225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68c0b0 (9): Bad file descriptor 00:24:15.126 [2024-12-06 13:18:21.394249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:15.126 [2024-12-06 13:18:21.394260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:15.126 [2024-12-06 13:18:21.394270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:15.126 [2024-12-06 13:18:21.394278] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:15.126 [2024-12-06 13:18:21.394284] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:15.126 [2024-12-06 13:18:21.394289] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:15.126 [2024-12-06 13:18:21.396265] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:24:15.126 [2024-12-06 13:18:21.396299] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:15.126 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:15.127 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.385 13:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.320 [2024-12-06 13:18:22.733151] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:24:16.320 [2024-12-06 13:18:22.733194] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:24:16.320 [2024-12-06 13:18:22.733216] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:16.320 [2024-12-06 13:18:22.819254] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:24:16.579 [2024-12-06 13:18:22.877712] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:24:16.579 [2024-12-06 13:18:22.878396] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x629b90:1 started. 00:24:16.579 [2024-12-06 13:18:22.880446] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:24:16.579 [2024-12-06 13:18:22.880491] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:24:16.579 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.579 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:16.579 [2024-12-06 13:18:22.882111] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x629b90 was disconnected and freed. delete nvme_qpair. 
00:24:16.579 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:16.579 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:16.579 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:16.579 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:16.579 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:16.579 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:16.579 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:16.579 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.579 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.579 2024/12/06 13:18:22 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:24:16.579 request: 00:24:16.579 { 00:24:16.579 "method": "bdev_nvme_start_discovery", 00:24:16.579 "params": { 00:24:16.579 "name": "nvme", 00:24:16.579 "trtype": "tcp", 00:24:16.579 "traddr": "10.0.0.3", 00:24:16.579 "adrfam": "ipv4", 00:24:16.579 "trsvcid": "8009", 00:24:16.579 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:16.579 "wait_for_attach": true 00:24:16.579 } 00:24:16.579 } 00:24:16.579 Got JSON-RPC error response 00:24:16.579 GoRPCClient: error on JSON-RPC call 00:24:16.579 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:16.579 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:16.579 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:16.579 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:16.579 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:16.579 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:16.579 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.580 13:18:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:16.580 13:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.580 2024/12/06 13:18:23 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:24:16.580 request: 00:24:16.580 { 00:24:16.580 "method": "bdev_nvme_start_discovery", 00:24:16.580 "params": { 00:24:16.580 "name": "nvme_second", 00:24:16.580 "trtype": "tcp", 00:24:16.580 "traddr": "10.0.0.3", 00:24:16.580 "adrfam": "ipv4", 00:24:16.580 "trsvcid": "8009", 00:24:16.580 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:16.580 "wait_for_attach": true 00:24:16.580 } 00:24:16.580 } 00:24:16.580 Got JSON-RPC error response 00:24:16.580 GoRPCClient: error on JSON-RPC call 00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:16.580 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:16.838 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:16.838 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:24:16.838 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:24:16.838 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:24:16.838 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:24:16.838 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:16.838 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:24:16.838 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:16.838 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:24:16.838 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:16.838 13:18:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.773 [2024-12-06 13:18:24.144847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.773 [2024-12-06 13:18:24.144926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6aec30 with addr=10.0.0.3, port=8010
00:24:17.773 [2024-12-06 13:18:24.144960] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:24:17.773 [2024-12-06 13:18:24.144973] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:24:17.773 [2024-12-06 13:18:24.144984] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect
00:24:18.707 [2024-12-06 13:18:25.144861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.707 [2024-12-06 13:18:25.144981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6aec30 with addr=10.0.0.3, port=8010
00:24:18.707 [2024-12-06 13:18:25.145006] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:24:18.707 [2024-12-06 13:18:25.145018] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:24:18.707 [2024-12-06 13:18:25.145028] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect
00:24:19.642 [2024-12-06 13:18:26.144671] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr
00:24:19.642 2024/12/06 13:18:26 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out
00:24:19.642 request:
00:24:19.642 {
00:24:19.642 "method": "bdev_nvme_start_discovery",
00:24:19.642 "params": {
00:24:19.642 "name": "nvme_second",
00:24:19.642 "trtype": "tcp",
00:24:19.642 "traddr": "10.0.0.3",
00:24:19.642 "adrfam": "ipv4",
00:24:19.642 "trsvcid": "8010",
00:24:19.642 "hostnqn": "nqn.2021-12.io.spdk:test",
00:24:19.642 "wait_for_attach": false,
00:24:19.642 "attach_timeout_ms": 3000
00:24:19.642 }
00:24:19.642 }
00:24:19.642 Got JSON-RPC error response
00:24:19.642 GoRPCClient: error on JSON-RPC call
00:24:19.642 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:24:19.642 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:24:19.642 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:19.642 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:19.642 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:19.642 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:24:19.642 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:24:19.642 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:24:19.642 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:24:19.642 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:19.642 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:24:19.642 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:19.643 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 89447
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:19.900 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 89412 ']'
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 89412
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 89412 ']'
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 89412
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89412
00:24:19.900 killing process with pid 89412
13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89412'
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 89412
00:24:19.900 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 89412
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:24:20.157 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:24:20.415 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:24:20.415 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:24:20.415 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns
00:24:20.415 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:20.415 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:20.415 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:20.415 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0
00:24:20.415
00:24:20.415 real 0m10.076s
00:24:20.415 user 0m19.597s
00:24:20.415 sys 0m1.561s
00:24:20.415 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:20.415 ************************************
00:24:20.415 END TEST nvmf_host_discovery
************************************
00:24:20.415 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:20.415 13:18:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:24:20.415 13:18:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:20.415 13:18:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:20.415 13:18:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:20.415 ************************************
00:24:20.415 START TEST nvmf_host_multipath_status
00:24:20.415 ************************************
00:24:20.415 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:24:20.415 * Looking for test storage...
00:24:20.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:24:20.415 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:20.415 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version
00:24:20.415 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:20.673 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:20.673 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:20.673 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:20.673 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:20.673 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-:
00:24:20.673 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1
00:24:20.673 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-:
00:24:20.673 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2
00:24:20.673 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<'
00:24:20.673 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2
00:24:20.673 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1
00:24:20.673 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:20.673 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in
00:24:20.673 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1
00:24:20.673 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:20.673 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:20.674 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1
00:24:20.674 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1
00:24:20.674 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:20.674 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1
00:24:20.674 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1
00:24:20.674 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2
00:24:20.674 13:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:24:20.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:20.674 --rc genhtml_branch_coverage=1
00:24:20.674 --rc genhtml_function_coverage=1
00:24:20.674 --rc genhtml_legend=1
00:24:20.674 --rc geninfo_all_blocks=1
00:24:20.674 --rc geninfo_unexecuted_blocks=1
00:24:20.674
00:24:20.674 '
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:24:20.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:20.674 --rc genhtml_branch_coverage=1
00:24:20.674 --rc genhtml_function_coverage=1
00:24:20.674 --rc genhtml_legend=1
00:24:20.674 --rc geninfo_all_blocks=1
00:24:20.674 --rc geninfo_unexecuted_blocks=1
00:24:20.674
00:24:20.674 '
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:24:20.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:20.674 --rc genhtml_branch_coverage=1
00:24:20.674 --rc genhtml_function_coverage=1
00:24:20.674 --rc genhtml_legend=1
00:24:20.674 --rc geninfo_all_blocks=1
00:24:20.674 --rc geninfo_unexecuted_blocks=1
00:24:20.674
00:24:20.674 '
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:24:20.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:20.674 --rc genhtml_branch_coverage=1
00:24:20.674 --rc genhtml_function_coverage=1
00:24:20.674 --rc genhtml_legend=1
00:24:20.674 --rc geninfo_all_blocks=1
00:24:20.674 --rc geninfo_unexecuted_blocks=1
00:24:20.674
00:24:20.674 '
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:24:20.674 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64
00:24:20.674 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:24:20.675 Cannot find device "nvmf_init_br"
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:24:20.675 Cannot find device "nvmf_init_br2"
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:24:20.675 Cannot find device "nvmf_tgt_br"
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:24:20.675 Cannot find device "nvmf_tgt_br2"
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:24:20.675 Cannot find device "nvmf_init_br"
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:24:20.675 Cannot find device "nvmf_init_br2"
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:24:20.675 Cannot find device "nvmf_tgt_br"
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:24:20.675 Cannot find device "nvmf_tgt_br2"
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:24:20.675 Cannot find device "nvmf_br"
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:24:20.675 Cannot find device "nvmf_init_if"
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:24:20.675 Cannot find device "nvmf_init_if2"
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:24:20.675 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:24:20.675 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:24:20.675 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:24:20.933 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:24:20.934 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:24:20.934 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms
00:24:20.934
00:24:20.934 --- 10.0.0.3 ping statistics ---
00:24:20.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:20.934 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:24:20.934 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:24:20.934 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms
00:24:20.934
00:24:20.934 --- 10.0.0.4 ping statistics ---
00:24:20.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:20.934 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:24:20.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:20.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms
00:24:20.934
00:24:20.934 --- 10.0.0.1 ping statistics ---
00:24:20.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:20.934 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:24:20.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:20.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms
00:24:20.934
00:24:20.934 --- 10.0.0.2 ping statistics ---
00:24:20.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:20.934 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:20.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=89966
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 89966
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 89966 ']'
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:20.934 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:21.192 [2024-12-06 13:18:27.520690] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization...
00:24:21.192 [2024-12-06 13:18:27.521016] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:21.192 [2024-12-06 13:18:27.671936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:24:21.192 [2024-12-06 13:18:27.710137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:21.192 [2024-12-06 13:18:27.710428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:21.192 [2024-12-06 13:18:27.710650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:21.192 [2024-12-06 13:18:27.710768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:21.192 [2024-12-06 13:18:27.710782] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:21.192 [2024-12-06 13:18:27.711743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:21.192 [2024-12-06 13:18:27.711758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:21.450 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:21.450 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:24:21.450 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:21.450 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:21.450 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:21.450 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:21.450 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89966
00:24:21.450 13:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:24:21.708 [2024-12-06 13:18:28.156240] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:21.708 13:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:24:22.272 Malloc0
00:24:22.272 13:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:24:22.272 13:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:22.837 13:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:24:23.095 [2024-12-06 13:18:29.365296] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:24:23.095 13:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:24:23.353 [2024-12-06 13:18:29.633431] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:24:23.353 13:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=90052
00:24:23.353 13:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:24:23.353 13:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:24:23.353 13:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 90052 /var/tmp/bdevperf.sock
00:24:23.353 13:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 90052 ']'
00:24:23.353 13:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:23.353 13:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:23.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:23.353 13:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:23.353 13:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:23.353 13:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:23.610 13:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:23.610 13:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:24:23.610 13:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:24:23.868 13:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:24:24.437 Nvme0n1
00:24:24.437 13:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:24:24.695 Nvme0n1
00:24:24.695 13:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:24:24.695 13:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:24:27.227 13:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:24:27.227 13:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
00:24:27.227 13:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:24:27.486 13:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:24:28.422 13:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:24:28.422 13:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:28.422 13:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:28.422 13:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:28.987 13:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:28.987 13:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:28.987 13:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:28.987 13:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:29.244 13:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:29.244 13:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:29.244 13:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:29.244 13:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:29.502 13:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:29.502 13:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:29.502 13:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:29.502 13:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:29.759 13:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:29.759 13:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:29.759 13:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:29.759 13:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:30.326 13:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:30.326 13:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:30.326 13:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:30.326 13:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:30.666 13:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:30.666 13:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:24:30.666 13:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:24:30.925 13:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:24:31.183 13:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:24:32.119 13:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:24:32.119 13:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:24:32.119 13:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:32.119 13:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:32.377 13:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:32.377 13:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:24:32.377 13:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:32.377 13:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:32.943 13:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:32.943 13:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:32.943 13:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:32.943 13:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:33.200 13:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:33.200 13:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:33.200 13:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:33.200 13:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:33.458 13:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:33.458 13:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:33.458 13:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:33.458 13:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:33.717 13:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:33.717 13:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:33.717 13:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:33.717 13:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:34.284 13:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:34.284 13:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:24:34.284 13:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:24:34.851 13:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized
00:24:35.109 13:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:24:36.043 13:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:24:36.043 13:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:36.043 13:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:36.043 13:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:36.608 13:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:36.608 13:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:36.608 13:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:36.608 13:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:36.865 13:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:36.865 13:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:36.865 13:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:36.865 13:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:37.428 13:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:37.428 13:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:37.428 13:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:37.428 13:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:37.428 13:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:37.428 13:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:37.428 13:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:37.428 13:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:37.990 13:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:37.990 13:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:37.991 13:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:37.991 13:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:37.991 13:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:37.991 13:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:24:37.991 13:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:24:38.554 13:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible
00:24:38.811 13:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:24:39.742 13:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:24:39.742 13:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:39.742 13:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:39.742 13:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:39.998 13:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:39.998 13:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:39.998 13:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:39.998 13:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:40.562 13:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:40.562 13:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:40.562 13:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:40.562 13:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:40.819 13:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:40.819 13:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:40.819 13:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:40.819 13:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:41.077 13:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:41.077 13:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:41.077 13:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:41.077 13:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:41.334 13:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:41.334 13:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:24:41.334 13:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:41.334 13:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:41.593 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:41.593 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:24:41.593 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:24:42.238 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status --
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:24:42.238 13:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:43.192 13:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:43.192 13:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:43.192 13:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.192 13:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:43.450 13:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:43.450 13:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:43.450 13:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.450 13:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:44.016 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:44.016 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:44.016 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.016 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:44.016 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.016 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:44.016 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.016 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:44.582 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.582 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:44.582 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.582 13:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:24:44.840 13:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:44.840 13:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:44.840 13:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.840 13:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:45.098 13:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:45.098 13:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:45.098 13:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:24:45.355 13:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:24:45.921 13:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:46.855 13:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:46.855 13:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:46.855 13:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.855 13:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:47.113 13:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:47.114 13:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:47.114 13:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.114 13:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:47.372 13:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.372 13:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:47.372 13:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.372 13:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:24:47.937 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.937 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:47.937 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.937 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:48.195 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.195 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:48.195 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.195 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:48.453 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:48.453 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:48.453 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.453 13:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:48.819 13:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.819 13:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:49.077 13:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:49.077 13:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:24:49.641 13:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:24:49.641 13:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:51.015 13:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:51.015 13:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:51.015 13:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
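Note: everything before the @116 call above ran under bdev_nvme's default active_passive multipath policy, which is why at most one of the two paths reported current==true at any time (and none while both listeners were inaccessible). The bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active RPC switches Nvme0n1 to active/active, where all connected, accessible paths in the best available ANA group carry I/O, so the @121 check now expects current=true on both 4420 and 4421. The set_ANA_state helper at @59/@60 is just two listener updates on the target side; a reconstruction from the trace (the real script may differ):

    # set_ANA_state <state for listener :4420> <state for listener :4421>
    set_ANA_state() {
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        # these calls go to the target's default RPC socket, not to the
        # bdevperf socket used by the status checks above
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
               -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
               -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }
    # e.g. the transition exercised at @119: set_ANA_state optimized optimized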
00:24:51.015 13:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:51.273 13:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.273 13:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:51.273 13:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.273 13:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:51.531 13:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.531 13:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:51.531 13:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.531 13:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:51.789 13:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.789 13:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:51.789 13:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.789 13:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:52.047 13:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.047 13:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:52.047 13:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.047 13:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:52.613 13:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.613 13:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:52.613 13:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.613 13:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:53.179 13:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.179 
13:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:53.179 13:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:24:53.437 13:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:24:53.695 13:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:54.660 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:54.660 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:54.660 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.660 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:54.917 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:54.917 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:54.917 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.917 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:55.174 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.174 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:55.174 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:55.174 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.431 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.431 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:55.431 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:55.431 13:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.687 13:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.687 13:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:55.944 13:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.944 13:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:56.201 13:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.201 13:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:56.201 13:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.201 13:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:56.458 13:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.458 13:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:56.458 13:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:24:56.715 13:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:24:56.972 13:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:57.902 13:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:57.902 13:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:57.902 13:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.902 13:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:58.160 13:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.160 13:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:58.160 13:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:58.160 13:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.726 13:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.726 13:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:24:58.726 13:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.726 13:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:58.983 13:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.983 13:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:58.983 13:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.983 13:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:59.242 13:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.242 13:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:59.242 13:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.242 13:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:59.501 13:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.501 13:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:59.501 13:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.501 13:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:59.760 13:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.760 13:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:59.760 13:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:00.019 13:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:00.278 13:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:01.259 13:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:01.259 13:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:01.518 13:19:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.518 13:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:01.777 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.777 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:01.777 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.777 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:02.036 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:02.036 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:02.036 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.036 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:02.295 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.295 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:02.295 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.295 13:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:02.864 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.864 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:02.864 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.864 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:02.864 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.864 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:02.864 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.864 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:25:03.454 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:03.454 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 90052 00:25:03.454 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 90052 ']' 00:25:03.454 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 90052 00:25:03.454 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:03.454 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:03.454 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90052 00:25:03.454 killing process with pid 90052 00:25:03.454 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:03.454 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:03.454 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90052' 00:25:03.454 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 90052 00:25:03.454 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 90052 00:25:03.454 { 00:25:03.454 "results": [ 00:25:03.454 { 00:25:03.454 "job": "Nvme0n1", 00:25:03.454 "core_mask": "0x4", 00:25:03.454 "workload": "verify", 00:25:03.454 "status": "terminated", 00:25:03.454 "verify_range": { 00:25:03.454 "start": 0, 00:25:03.454 "length": 16384 00:25:03.454 }, 00:25:03.454 "queue_depth": 128, 00:25:03.454 "io_size": 4096, 00:25:03.454 "runtime": 38.389303, 00:25:03.454 "iops": 8304.579012544198, 00:25:03.454 "mibps": 32.439761767750774, 00:25:03.454 "io_failed": 0, 00:25:03.454 "io_timeout": 0, 00:25:03.454 "avg_latency_us": 15381.063447893952, 00:25:03.454 "min_latency_us": 134.98181818181817, 00:25:03.454 "max_latency_us": 4087539.898181818 00:25:03.454 } 00:25:03.454 ], 00:25:03.454 "core_count": 1 00:25:03.454 } 00:25:03.454 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 90052 00:25:03.454 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:03.454 [2024-12-06 13:18:29.725876] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:25:03.454 [2024-12-06 13:18:29.726055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90052 ] 00:25:03.454 [2024-12-06 13:18:29.886541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.454 [2024-12-06 13:18:29.924852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.454 Running I/O for 90 seconds... 
00:25:03.454 8811.00 IOPS, 34.42 MiB/s [2024-12-06T13:19:09.982Z] 8764.50 IOPS, 34.24 MiB/s [2024-12-06T13:19:09.982Z] 8768.00 IOPS, 34.25 MiB/s [2024-12-06T13:19:09.982Z] 8688.50 IOPS, 33.94 MiB/s [2024-12-06T13:19:09.982Z] 8681.60 IOPS, 33.91 MiB/s [2024-12-06T13:19:09.982Z] 8670.67 IOPS, 33.87 MiB/s [2024-12-06T13:19:09.982Z] 8695.71 IOPS, 33.97 MiB/s [2024-12-06T13:19:09.982Z] 8696.62 IOPS, 33.97 MiB/s [2024-12-06T13:19:09.982Z] 8694.33 IOPS, 33.96 MiB/s [2024-12-06T13:19:09.982Z] 8669.10 IOPS, 33.86 MiB/s [2024-12-06T13:19:09.982Z] 8677.18 IOPS, 33.90 MiB/s [2024-12-06T13:19:09.982Z] 8649.42 IOPS, 33.79 MiB/s [2024-12-06T13:19:09.982Z] 8638.77 IOPS, 33.75 MiB/s [2024-12-06T13:19:09.982Z] 8635.86 IOPS, 33.73 MiB/s [2024-12-06T13:19:09.982Z] 8647.33 IOPS, 33.78 MiB/s [2024-12-06T13:19:09.982Z] 8642.75 IOPS, 33.76 MiB/s [2024-12-06T13:19:09.982Z] 8663.00 IOPS, 33.84 MiB/s [2024-12-06T13:19:09.982Z] [2024-12-06 13:18:48.350438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.454 [2024-12-06 13:18:48.350502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:03.454 [2024-12-06 13:18:48.350539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.454 [2024-12-06 13:18:48.350559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:03.454 [2024-12-06 13:18:48.350582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.454 [2024-12-06 13:18:48.350599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:03.454 [2024-12-06 13:18:48.350620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.454 [2024-12-06 13:18:48.350637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:03.454 [2024-12-06 13:18:48.350659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.454 [2024-12-06 13:18:48.350675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:03.454 [2024-12-06 13:18:48.350697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.454 [2024-12-06 13:18:48.350713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:03.454 [2024-12-06 13:18:48.350735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.454 [2024-12-06 13:18:48.350751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:03.454 [2024-12-06 13:18:48.350773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:672 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.454 [2024-12-06 13:18:48.350788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:03.454 [2024-12-06 13:18:48.350810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.454 [2024-12-06 13:18:48.350826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:03.454 [2024-12-06 13:18:48.350878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.454 [2024-12-06 13:18:48.350895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:03.454 [2024-12-06 13:18:48.350917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.454 [2024-12-06 13:18:48.350933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:03.454 [2024-12-06 13:18:48.350971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.454 [2024-12-06 13:18:48.350990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:03.454 [2024-12-06 13:18:48.351012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.454 [2024-12-06 13:18:48.351028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:03.454 [2024-12-06 13:18:48.351050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.454 [2024-12-06 13:18:48.351067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:03.454 [2024-12-06 13:18:48.351089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.454 [2024-12-06 13:18:48.351105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:03.454 [2024-12-06 13:18:48.351127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.351189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.351228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:51 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.351265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.351304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.351341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.351379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.351431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.351469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.351507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.351546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.351584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.351622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.351660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.351699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.351738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.351778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.351816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.351855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.351880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.352364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.352394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.352423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.352441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.352463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.352479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 
13:18:48.352501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.352518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.352540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.352556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.352578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.352594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.352616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.352632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.352654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.352671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.352693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.352709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.352731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.352748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.352770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.352786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.352808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.352824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.352863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.352880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:25:03.455 [2024-12-06 13:18:48.352902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.352918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.352953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.352973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.352996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.353013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.353035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.353051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.353073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.353090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.353112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.353128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.353150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.455 [2024-12-06 13:18:48.353166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:03.455 [2024-12-06 13:18:48.353189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.353205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.353227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.353243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.353266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.353282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.353304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.353320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.353343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.353374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.353399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.353415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.353437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.353453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.353475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.353491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.353513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.353529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.353551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.353568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.353590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.353606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.354768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.354800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.354826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.354843] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.354865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.354881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.354903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.354919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.354956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.354976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.354999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355285] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:03.456 [2024-12-06 13:18:48.355672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.456 [2024-12-06 13:18:48.355865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.456 [2024-12-06 13:18:48.355902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:03.456 [2024-12-06 13:18:48.355925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.457 [2024-12-06 13:18:48.355954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.355991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.457 [2024-12-06 13:18:48.356021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.356060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.457 [2024-12-06 13:18:48.356079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.356102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:608 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.457 [2024-12-06 13:18:48.356118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.457 [2024-12-06 13:18:48.357196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.457 [2024-12-06 13:18:48.357243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.357282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.357320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.357359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.357397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.357434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.357472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.357510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:96 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.357549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.357599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.357640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.357678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.357716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.357754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.357792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.357829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.357867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.357905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.357959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.357986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.358002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.358024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.358040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.358062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.358078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.358117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.358134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.358156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.358172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.358195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.358211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.358233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.358250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.358273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.358289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.358311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.358327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 
13:18:48.358349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.358365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.358387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.358403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.358424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.358441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.358462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.358478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.358500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.358516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.358933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.457 [2024-12-06 13:18:48.358975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:03.457 [2024-12-06 13:18:48.359015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 
m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.359957] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.359995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.360037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.360064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.360081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.360103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.360119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.360141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.360157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.360178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.360194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.360216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.360232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.360254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.360270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.360294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.360311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.360333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.458 [2024-12-06 13:18:48.360349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:03.458 [2024-12-06 13:18:48.360371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.360387] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.360409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.360426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.360448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.360466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.360488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.360504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.360534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.360551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.360573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.360589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.360610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.360626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.360648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.360664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.360686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.360702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.360724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.360740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.360762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:03.459 [2024-12-06 13:18:48.360778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.360800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.360816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.360840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.360856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.360878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.360894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.360918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.360935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.360973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.360990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.361012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.361037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.361060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.361076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.361098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.361115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.361137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.361153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.361175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1072 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.361191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.361213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.361229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.361251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.361267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.361288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.361304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.361326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.361342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.361365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.361381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.361402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.361418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.361440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.361457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.361478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.361501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.361524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.361541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.362413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.362443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.362471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.362488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.362510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.362527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.362549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.362565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.362587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.362603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.362625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.362641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.362662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.362678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.362700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.362716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.362738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.459 [2024-12-06 13:18:48.362753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:03.459 [2024-12-06 13:18:48.362775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.460 [2024-12-06 13:18:48.362791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:03.460 [2024-12-06 13:18:48.362812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.460 [2024-12-06 13:18:48.362828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:03.460 [2024-12-06 13:18:48.362867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.460 [2024-12-06 13:18:48.362884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:03.460 [2024-12-06 13:18:48.362906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.460 [2024-12-06 13:18:48.362921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:03.460 [2024-12-06 13:18:48.362958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.460 [2024-12-06 13:18:48.362977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:03.460 [2024-12-06 13:18:48.363000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.460 [2024-12-06 13:18:48.363016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:03.460 [2024-12-06 13:18:48.363037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.460 [2024-12-06 13:18:48.363053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:03.460 [2024-12-06 13:18:48.363076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.460 [2024-12-06 13:18:48.363091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:03.460 [2024-12-06 13:18:48.363113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.460 [2024-12-06 13:18:48.363128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:03.460 [2024-12-06 13:18:48.363166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.460 [2024-12-06 13:18:48.363183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:03.460 [2024-12-06 13:18:48.363205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.460 [2024-12-06 13:18:48.363221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:03.460 
[... a long run of near-identical nvme_qpair.c NOTICE command/completion pairs omitted (2024-12-06 13:18:48.363242 through 13:18:48.390054): WRITE commands to lba 624-1584 and READ commands to lba 568-616, all on sqid:1 with len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0; the same LBAs are then reissued under new cids and complete with the same status ...]
p:0 m:0 dnr:0 00:25:03.465 [2024-12-06 13:18:48.390096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.465 [2024-12-06 13:18:48.390119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:03.465 [2024-12-06 13:18:48.390150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.465 [2024-12-06 13:18:48.390171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:03.465 [2024-12-06 13:18:48.390202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.465 [2024-12-06 13:18:48.390223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:03.465 [2024-12-06 13:18:48.390254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.465 [2024-12-06 13:18:48.390275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:03.465 [2024-12-06 13:18:48.390305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.465 [2024-12-06 13:18:48.390327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:03.465 [2024-12-06 13:18:48.390357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.465 [2024-12-06 13:18:48.390378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:03.465 [2024-12-06 13:18:48.390409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.465 [2024-12-06 13:18:48.390430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:03.465 [2024-12-06 13:18:48.390460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.465 [2024-12-06 13:18:48.390481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:03.465 [2024-12-06 13:18:48.390511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.465 [2024-12-06 13:18:48.390532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:03.465 [2024-12-06 13:18:48.390563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.465 [2024-12-06 13:18:48.390585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:03.465 [2024-12-06 13:18:48.390623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.465 [2024-12-06 13:18:48.390645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:03.465 [2024-12-06 13:18:48.390674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.465 [2024-12-06 13:18:48.390696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:03.465 [2024-12-06 13:18:48.390736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.465 [2024-12-06 13:18:48.390759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.390789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.390810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.390840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.390862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.390892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.390914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.390959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.390984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 
[2024-12-06 13:18:48.391556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1080 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.391969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.391987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.392009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.392025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.392047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.392063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.392085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.392101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.392122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.392138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.392160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.392176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.392968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.392996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.393023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.393041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.393064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.393080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.393115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 
nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.466 [2024-12-06 13:18:48.393132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:03.466 [2024-12-06 13:18:48.393154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:03.467 
[2024-12-06 13:18:48.393870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.393959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.393978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.394001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.394017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.394038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.467 [2024-12-06 13:18:48.394062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.394086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.467 [2024-12-06 13:18:48.394102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.394123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.467 [2024-12-06 13:18:48.394139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.394160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.467 [2024-12-06 13:18:48.394176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.394198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.467 [2024-12-06 13:18:48.394214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.394235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.467 [2024-12-06 13:18:48.394251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 
cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.394272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.467 [2024-12-06 13:18:48.394288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.394311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.394338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.394377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.394405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.394431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.394447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.394469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.394485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.394507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.394522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.394544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.394559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.394593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.394615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.394664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.467 [2024-12-06 13:18:48.394687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:03.467 [2024-12-06 13:18:48.394710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.394726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.394753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.394784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.394814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.394831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.394852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.394868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.394894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.394924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.394984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.395006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.395030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.395045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.395067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.395083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.395104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.395127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.395180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.395215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.395251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.395269] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.395291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.395306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.395343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.395372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.395404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.395427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.395455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.395473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.395505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.395537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.395572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.395597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.395634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.395659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.395690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.395719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.395755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.395781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.395813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.395839] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.396860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.396896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.396926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.396991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.397030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.397060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.397100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.468 [2024-12-06 13:18:48.397131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:03.468 [2024-12-06 13:18:48.397168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.397187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.397222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.397251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.397276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.397292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.397315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.397331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.397360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.397389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.397423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 
13:18:48.397441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.397463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.397479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.397501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.397517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.397553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.397581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.397613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.397636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.397677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.397694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.397716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.397733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.397755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.397770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.397792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.397808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.397829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.397848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.397885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1472 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.397916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.397965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.397985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.398008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.398032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.398070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.398090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.398113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.398129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.398151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.398166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.398191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.398220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.398266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.398283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.398305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.398321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.398343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.398358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:03.469 [2024-12-06 13:18:48.398380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:18 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.469 [2024-12-06 13:18:48.398396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0
[... 00:25:03.469-00:25:03.475, 2024-12-06 13:18:48.398417 through 13:18:48.408563: several hundred further nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs condensed; WRITE sqid:1 commands (lba 624-1584, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) interleaved with READ sqid:1 commands (lba 568-616, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd advancing modulo 0x80 from 006b through 0038, cids spread across 0-126 ...]
00:25:03.475 [2024-12-06 13:18:48.408589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.408605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS
INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:18:48.408631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.408647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:18:48.408672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.408688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:18:48.408714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.408730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:18:48.408755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.408771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:18:48.408797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.408813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:18:48.408839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.408855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:18:48.408880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.408896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:18:48.408922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.408938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:18:48.408977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.408994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:18:48.409019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.409035] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:18:48.409068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.409115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:18:48.409144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.409161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:18:48.409187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.409203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:18:48.409229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.409245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:18:48.409271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.409287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:18:48.409313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.409329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:18:48.409355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.409371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:18:48.409542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:18:48.409564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:03.475 8195.94 IOPS, 32.02 MiB/s [2024-12-06T13:19:10.003Z] 7764.58 IOPS, 30.33 MiB/s [2024-12-06T13:19:10.003Z] 7376.35 IOPS, 28.81 MiB/s [2024-12-06T13:19:10.003Z] 7025.10 IOPS, 27.44 MiB/s [2024-12-06T13:19:10.003Z] 7065.50 IOPS, 27.60 MiB/s [2024-12-06T13:19:10.003Z] 7122.52 IOPS, 27.82 MiB/s [2024-12-06T13:19:10.003Z] 7190.00 IOPS, 28.09 MiB/s [2024-12-06T13:19:10.003Z] 7301.40 IOPS, 28.52 MiB/s [2024-12-06T13:19:10.003Z] 7446.27 IOPS, 29.09 MiB/s [2024-12-06T13:19:10.003Z] 7596.33 IOPS, 29.67 MiB/s [2024-12-06T13:19:10.003Z] 7739.29 IOPS, 30.23 MiB/s [2024-12-06T13:19:10.003Z] 
7807.97 IOPS, 30.50 MiB/s [2024-12-06T13:19:10.003Z] 7845.90 IOPS, 30.65 MiB/s [2024-12-06T13:19:10.003Z] 7885.90 IOPS, 30.80 MiB/s [2024-12-06T13:19:10.003Z] 7921.75 IOPS, 30.94 MiB/s [2024-12-06T13:19:10.003Z] 8029.30 IOPS, 31.36 MiB/s [2024-12-06T13:19:10.003Z] 8133.21 IOPS, 31.77 MiB/s [2024-12-06T13:19:10.003Z] 8226.69 IOPS, 32.14 MiB/s [2024-12-06T13:19:10.003Z] [2024-12-06 13:19:06.759412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.475 [2024-12-06 13:19:06.759483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:19:06.760185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:19:06.760215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:19:06.760252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:19:06.760270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:19:06.760323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:114664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:19:06.760340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:19:06.760362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:114680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:19:06.760378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:19:06.760399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.475 [2024-12-06 13:19:06.760415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:19:06.760436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:113976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.475 [2024-12-06 13:19:06.760452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:19:06.760473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:114008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.475 [2024-12-06 13:19:06.760489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:19:06.760510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.475 [2024-12-06 13:19:06.760525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:19:06.760546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:114072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.475 [2024-12-06 13:19:06.760561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:03.475 [2024-12-06 13:19:06.760582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:114104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.475 [2024-12-06 13:19:06.760597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.760618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.760634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.760655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:114168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.760670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.760691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.760706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.760728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:114232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.760743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.760777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.476 [2024-12-06 13:19:06.760795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.760817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:114736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.476 [2024-12-06 13:19:06.760833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.760855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.476 [2024-12-06 13:19:06.760871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.760892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.476 [2024-12-06 13:19:06.760908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.760930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.476 [2024-12-06 13:19:06.760961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.760986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.476 [2024-12-06 13:19:06.761001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.761023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.476 [2024-12-06 13:19:06.761039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.761060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.476 [2024-12-06 13:19:06.761076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.761097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.761113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.761134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.761150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.761172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.761187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.761208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:114064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.761223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.761246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:114096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.761271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.761294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114128 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.761310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.761332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.761347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.761369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:114192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.761385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.761406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:114224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.761421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.761443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.476 [2024-12-06 13:19:06.761458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.761480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.476 [2024-12-06 13:19:06.761496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.762368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.476 [2024-12-06 13:19:06.762397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.762425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.476 [2024-12-06 13:19:06.762443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.762465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.476 [2024-12-06 13:19:06.762480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.762502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:114272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.762517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.762539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:115 nsid:1 lba:114304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.762554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.762576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:114336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.762604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.762628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.476 [2024-12-06 13:19:06.762643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.762665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.476 [2024-12-06 13:19:06.762681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.762702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:114264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.762717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.762739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.762754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.762776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.762797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.762820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.476 [2024-12-06 13:19:06.762835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.762857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:114352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.762872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.762894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.762909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 
13:19:06.762930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.476 [2024-12-06 13:19:06.762961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:03.476 [2024-12-06 13:19:06.762985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:114448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.477 [2024-12-06 13:19:06.763001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:03.477 [2024-12-06 13:19:06.763023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:114480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.477 [2024-12-06 13:19:06.763038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:03.477 [2024-12-06 13:19:06.763060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.477 [2024-12-06 13:19:06.763076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:03.477 [2024-12-06 13:19:06.763107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.477 [2024-12-06 13:19:06.763123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:03.477 [2024-12-06 13:19:06.763158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.477 [2024-12-06 13:19:06.763187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:03.477 8271.28 IOPS, 32.31 MiB/s [2024-12-06T13:19:10.005Z] 8283.57 IOPS, 32.36 MiB/s [2024-12-06T13:19:10.005Z] 8301.50 IOPS, 32.43 MiB/s [2024-12-06T13:19:10.005Z] Received shutdown signal, test time was about 38.390147 seconds 00:25:03.477 00:25:03.477 Latency(us) 00:25:03.477 [2024-12-06T13:19:10.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.477 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:03.477 Verification LBA range: start 0x0 length 0x4000 00:25:03.477 Nvme0n1 : 38.39 8304.58 32.44 0.00 0.00 15381.06 134.98 4087539.90 00:25:03.477 [2024-12-06T13:19:10.005Z] =================================================================================================================== 00:25:03.477 [2024-12-06T13:19:10.005Z] Total : 8304.58 32.44 0.00 0.00 15381.06 134.98 4087539.90 00:25:03.477 13:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:03.736 13:19:10 
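The (03/02) completions condensed above are the NVMe path-related status "ANA Inaccessible": the multipath test deliberately flips ANA states, I/O on the affected path fails over, and the throughput samples dip and recover accordingly. The teardown the trace then walks through is the usual three-step pattern; a minimal bash sketch (paths copied from the trace; the trap being cleared was installed earlier, outside this excerpt, so its exact form is an assumption):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 1) remove the test subsystem over JSON-RPC
    $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # 2) clear the cleanup trap (assumed earlier: trap nvmftestfini SIGINT SIGTERM EXIT)
    trap - SIGINT SIGTERM EXIT
    # 3) drop the scratch file the status checks wrote
    rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt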
00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:03.736 rmmod nvme_tcp
00:25:03.736 rmmod nvme_fabrics
00:25:03.736 rmmod nvme_keyring
00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 89966 ']'
00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 89966
00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 89966 ']'
00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 89966
00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:03.736 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89966
00:25:03.995 killing process with pid 89966
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89966'
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 89966
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 89966
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
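nvmftestfini unloads the kernel initiator stack (modprobe -r nvme-tcp cascades into rmmod of nvme_tcp, nvme_fabrics and nvme_keyring), kills the SPDK target (pid 89966, running as reactor_0), and strips SPDK-tagged firewall rules with iptables-save | grep -v SPDK_NVMF | iptables-restore. A rough reconstruction of the killprocess helper as it behaves in the trace (simplified sketch, not the verbatim autotest_common.sh):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                          # @954: need a pid
        kill -0 "$pid" 2>/dev/null || return 1             # @958: is it still alive?
        local process_name=
        if [ "$(uname)" = Linux ]; then                    # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # "reactor_0" in this run
        fi
        # @964: a sudo-wrapped target would need extra handling; not the case here
        echo "killing process with pid $pid"               # @972
        kill "$pid"                                        # @973: SIGTERM
        wait "$pid" 2>/dev/null                            # @978: reap (the target is our child)
    }
    killprocess 89966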
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:25:03.995 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:25:04.255 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:25:04.255 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:25:04.255 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:25:04.255 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:25:04.255 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns
00:25:04.255 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:04.255 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:04.255 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:04.255 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0
00:25:04.255
00:25:04.255 real	0m43.866s
00:25:04.255 user	2m25.869s
00:25:04.255 sys	0m10.476s
00:25:04.255 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:04.255 13:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:04.255 ************************************
00:25:04.255 END TEST nvmf_host_multipath_status
00:25:04.255 ************************************
00:25:04.255 13:19:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:25:04.255 13:19:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:04.255 13:19:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:04.255 13:19:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:04.255 ************************************
00:25:04.255 START TEST nvmf_discovery_remove_ifc
00:25:04.255 ************************************
00:25:04.255 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:25:04.516 * Looking for test storage...
00:25:04.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:25:04.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:25:04.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:25:04.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:25:04.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:25:04.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:04.516 [... scripts/common.sh@333-@368: cmp_versions splits 1.15 and 2 on ".", "-" and ":" into ver1=(1 15) and ver2=(2), walks the fields, finds 1 < 2 at the first field and returns 0, so the installed lcov is treated as older than 2 ...]
00:25:04.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:04.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:25:04.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov' with the same --rc option set
00:25:04.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
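The cmp_versions walk condensed above is a plain field-wise version compare: it decides lcov 1.15 is older than 2, so the legacy --rc lcov_* option spellings are exported. A simplified bash reconstruction of the logic the trace steps through (the real scripts/common.sh helper is more general):

    cmp_versions() {
        local IFS=.-:                  # split version fields on '.', '-' and ':'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields compare equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "lcov is older than 2"   # field 0: 1 < 2, so lt returns true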
00:25:04.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:25:04.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:04.516 [... nvmf/common.sh@9-@22: test defaults assigned: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_TRANSPORT_OPTS='', NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 (via nvme gen-hostnqn), NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2, NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID"), NVME_CONNECT='nvme connect', NET_TYPE=virt, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn ...]
00:25:04.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:04.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:25:04.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:04.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:04.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:04.516 [... paths/export.sh@2-@6: the /opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin triple is prepended to PATH once per prior source, then PATH is exported and echoed; the heavily duplicated PATH values are omitted ...]
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:04.517 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
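The "[: : integer expression expected" complaint above comes from common.sh line 33 testing an unset variable with -eq ('[' '' -eq 1 ']'); the test merely fails, so the script carries on. An illustration of the mechanism (MAYBE_FLAG is a made-up name; the actual variable at that line is not visible in this log):

    unset MAYBE_FLAG
    [ "$MAYBE_FLAG" -eq 1 ] && echo never        # prints: [: : integer expression expected
    [ "${MAYBE_FLAG:-0}" -eq 1 ] && echo never   # the defaulted form stays quiet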
-- # discovery_port=8009
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:25:04.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init
00:25:04.517 [... nvmf/common.sh@145-@160: veth topology names and addresses assigned: NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_SECOND_INITIATOR_IP=10.0.0.2, NVMF_FIRST_TARGET_IP=10.0.0.3, NVMF_SECOND_TARGET_IP=10.0.0.4, NVMF_INITIATOR_IP=10.0.0.1, NVMF_BRIDGE=nvmf_br, initiator interfaces nvmf_init_if/nvmf_init_if2 with bridge ends nvmf_init_br/nvmf_init_br2, target namespace nvmf_tgt_ns_spdk with NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE"), target interfaces nvmf_tgt_if/nvmf_tgt_if2 with bridge ends nvmf_tgt_br/nvmf_tgt_br2 ...]
00:25:04.517 [... nvmf/common.sh@162-@174: cleanup of any stale topology; each probe fails harmlessly and is swallowed by "# true": the ip link set nomaster/down and ip link delete commands report Cannot find device "nvmf_init_br", "nvmf_init_br2", "nvmf_tgt_br", "nvmf_tgt_br2", "nvmf_br", "nvmf_init_if" and "nvmf_init_if2", and the two namespace deletions report Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory ...]
00:25:04.778 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:25:04.778 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:25:04.778 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:25:04.778 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:25:04.778 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:25:04.778 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:25:04.778 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:25:04.778 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:25:04.778 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:25:04.778 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:25:04.778 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:25:04.778 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:25:05.038 13:19:11
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:05.038 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:05.038 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:25:05.038 00:25:05.038 --- 10.0.0.3 ping statistics --- 00:25:05.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.038 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:05.038 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:05.038 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:25:05.038 00:25:05.038 --- 10.0.0.4 ping statistics --- 00:25:05.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.038 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:05.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:05.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:25:05.038 00:25:05.038 --- 10.0.0.1 ping statistics --- 00:25:05.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.038 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:05.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:25:05.038 00:25:05.038 --- 10.0.0.2 ping statistics --- 00:25:05.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.038 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=91445 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 91445 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 91445 ']' 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
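The nvmf_veth_init sequence traced above builds a small two-sided topology: the initiator-side veth pairs stay in the root namespace, the target-side peers move into nvmf_tgt_ns_spdk, and the bridge-facing ends are all enslaved to nvmf_br. A minimal sketch of one initiator/target pair, using the names and addresses from the trace (the test builds a second if2 pair the same way; the ipts one-liner is reconstructed from its expansion above, not taken from the source):

    # Rebuild the first initiator/target pair from the trace (run as root).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # ipts tags every rule so teardown can strip them wholesale via
    # iptables-save | grep -v SPDK_NVMF | iptables-restore (the iptr step
    # at the end of the test).
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3   # root namespace -> namespaced target address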
00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.038 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.038 [2024-12-06 13:19:11.514686] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:25:05.038 [2024-12-06 13:19:11.514799] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.297 [2024-12-06 13:19:11.669174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.297 [2024-12-06 13:19:11.708366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.297 [2024-12-06 13:19:11.708450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.297 [2024-12-06 13:19:11.708475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.297 [2024-12-06 13:19:11.708485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.297 [2024-12-06 13:19:11.708494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:05.297 [2024-12-06 13:19:11.708876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.297 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.297 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:05.297 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:05.297 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:05.297 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.557 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.557 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:05.557 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.557 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.557 [2024-12-06 13:19:11.854908] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.557 [2024-12-06 13:19:11.863088] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:25:05.557 null0 00:25:05.557 [2024-12-06 13:19:11.894989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:05.557 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.557 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91476 00:25:05.557 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:05.557 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@60 -- # waitforlisten 91476 /tmp/host.sock 00:25:05.557 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 91476 ']' 00:25:05.557 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:05.557 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.557 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:05.557 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:05.557 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.557 13:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.557 [2024-12-06 13:19:11.981428] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:25:05.557 [2024-12-06 13:19:11.981535] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91476 ] 00:25:05.817 [2024-12-06 13:19:12.131767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.817 [2024-12-06 13:19:12.165440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.817 13:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.817 13:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:05.817 13:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:05.817 13:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:05.817 13:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.817 13:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.817 13:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.817 13:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:05.817 13:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.817 13:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.817 13:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.817 13:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:05.817 13:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.817 13:19:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:07.193 [2024-12-06 13:19:13.348354] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:25:07.193 [2024-12-06 13:19:13.348397] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:25:07.193 [2024-12-06 13:19:13.348418] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:25:07.193 [2024-12-06 13:19:13.434613] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:25:07.193 [2024-12-06 13:19:13.489062] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:25:07.193 [2024-12-06 13:19:13.489902] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x136f1e0:1 started. 00:25:07.193 [2024-12-06 13:19:13.491600] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:07.193 [2024-12-06 13:19:13.491664] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:07.193 [2024-12-06 13:19:13.491693] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:07.193 [2024-12-06 13:19:13.491724] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:25:07.193 [2024-12-06 13:19:13.491754] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:07.193 [2024-12-06 13:19:13.496698] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x136f1e0 was disconnected and freed. delete nvme_qpair. 
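At this point two SPDK processes are running: the target inside nvmf_tgt_ns_spdk on the default RPC socket, and the initiator-side host app (pid 91476 above) on /tmp/host.sock, with discovery started against 10.0.0.3:8009 using --wait-for-attach. The wait_for_bdev/get_bdev_list pair that the xtrace keeps expanding below reduces to roughly the following; this is a reconstruction from the trace, not the verbatim helpers, and rpc_cmd is assumed to wrap SPDK's scripts/rpc.py:

    # Reconstructed polling helpers (paths as used in this run).
    rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # Poll once a second until the space-joined bdev list matches $1.
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }
    wait_for_bdev nvme0n1   # returns once the discovered namespace attaches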
00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:07.193 13:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:08.154 13:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:08.154 13:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:08.154 13:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.154 13:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.154 13:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:08.154 13:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.154 13:19:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:08.154 13:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.154 13:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:08.154 13:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:09.528 13:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:09.528 13:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.528 13:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:09.528 13:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:09.528 13:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.528 13:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:09.528 13:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:09.528 13:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.528 13:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:09.528 13:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:10.465 13:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:10.465 13:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.465 13:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.465 13:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:10.465 13:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:10.465 13:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:10.465 13:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:10.465 13:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.465 13:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:10.465 13:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:11.402 13:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:11.402 13:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:11.402 13:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.402 13:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:11.402 13:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:11.402 13:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.402 13:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:11.402 13:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.402 13:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:11.402 13:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:12.339 13:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:12.339 13:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:12.339 13:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:12.339 13:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.339 13:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:12.339 13:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:12.339 13:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:12.598 13:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.598 13:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:12.598 13:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:12.598 [2024-12-06 13:19:18.919780] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:12.598 [2024-12-06 13:19:18.919853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.598 [2024-12-06 13:19:18.919877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.598 [2024-12-06 13:19:18.919892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.598 [2024-12-06 13:19:18.919902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.598 [2024-12-06 13:19:18.919912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.598 [2024-12-06 13:19:18.919921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.598 [2024-12-06 13:19:18.919931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.598 [2024-12-06 13:19:18.919953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.598 [2024-12-06 13:19:18.919966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.598 [2024-12-06 13:19:18.919976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.598 [2024-12-06 13:19:18.919985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b1290 is same with the state(6) to be set 00:25:12.598 [2024-12-06 13:19:18.929774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b1290 (9): Bad file descriptor 00:25:12.598 [2024-12-06 13:19:18.939808] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:12.598 [2024-12-06 13:19:18.939842] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:12.598 [2024-12-06 13:19:18.939850] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:12.598 [2024-12-06 13:19:18.939856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:12.598 [2024-12-06 13:19:18.939902] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:13.535 13:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:13.535 13:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.535 13:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.535 13:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:13.535 13:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:13.535 13:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:13.535 13:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:13.535 [2024-12-06 13:19:19.963116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:13.535 [2024-12-06 13:19:19.963294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1290 with addr=10.0.0.3, port=4420 00:25:13.535 [2024-12-06 13:19:19.963334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b1290 is same with the state(6) to be set 00:25:13.535 [2024-12-06 13:19:19.963408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b1290 (9): Bad file descriptor 00:25:13.535 [2024-12-06 13:19:19.964370] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:25:13.535 [2024-12-06 13:19:19.964476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:13.535 [2024-12-06 13:19:19.964505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:13.535 [2024-12-06 13:19:19.964528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:13.536 [2024-12-06 13:19:19.964550] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
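The Connection timed out and Bad file descriptor churn here is provoked deliberately: a few entries back the script deleted the listener address and downed the link inside the target namespace, then let the host's reconnect machinery run into the limits passed to bdev_nvme_start_discovery. The injection step, as traced:

    # Fault injection from earlier in this test:
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # With --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2 the host
    # retries roughly once a second for about two seconds, then gives up:
    # "Resetting controller failed" followed by "in failed state".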
00:25:13.536 [2024-12-06 13:19:19.964594] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:13.536 [2024-12-06 13:19:19.964615] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:13.536 [2024-12-06 13:19:19.964640] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:13.536 [2024-12-06 13:19:19.964654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:13.536 13:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.536 13:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:13.536 13:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:14.472 [2024-12-06 13:19:20.964740] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:14.472 [2024-12-06 13:19:20.964788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:14.472 [2024-12-06 13:19:20.964821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:14.472 [2024-12-06 13:19:20.964833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:14.472 [2024-12-06 13:19:20.964844] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:14.472 [2024-12-06 13:19:20.964854] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:14.472 [2024-12-06 13:19:20.964861] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:14.472 [2024-12-06 13:19:20.964867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:14.472 [2024-12-06 13:19:20.964901] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:25:14.472 [2024-12-06 13:19:20.964975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.472 [2024-12-06 13:19:20.964993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.472 [2024-12-06 13:19:20.965007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.472 [2024-12-06 13:19:20.965018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.472 [2024-12-06 13:19:20.965029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.472 [2024-12-06 13:19:20.965038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.472 [2024-12-06 13:19:20.965049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.472 [2024-12-06 13:19:20.965058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.472 [2024-12-06 13:19:20.965069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.472 [2024-12-06 13:19:20.965079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.472 [2024-12-06 13:19:20.965089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
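Once the controller is declared lost, the discovery poller drops its entry (remove_discovery_entry above) and the nvme0n1 bdev disappears; the script's wait_for_bdev '' confirms this by polling for an empty list. Expanded inline (rpc_cmd as sketched earlier):

    # Poll until the host reports no bdevs left.
    while [[ -n "$(rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" ]]; do
        sleep 1
    done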
00:25:14.472 [2024-12-06 13:19:20.965516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12dd820 (9): Bad file descriptor 00:25:14.472 [2024-12-06 13:19:20.966530] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:14.472 [2024-12-06 13:19:20.966549] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:14.472 13:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:14.472 13:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.472 13:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:14.472 13:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:14.472 13:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.472 13:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:14.472 13:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:14.730 13:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.730 13:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:14.730 13:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:14.730 13:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:14.730 13:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:14.730 13:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:14.730 13:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.730 13:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.730 13:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:14.730 13:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:14.730 13:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:14.730 13:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:14.730 13:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.730 13:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:14.730 13:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:15.785 13:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:15.785 13:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:15.785 13:19:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.785 13:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:15.785 13:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.785 13:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:15.785 13:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:15.785 13:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.785 13:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:15.785 13:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:16.718 [2024-12-06 13:19:22.970177] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:25:16.718 [2024-12-06 13:19:22.970218] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:25:16.718 [2024-12-06 13:19:22.970240] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:25:16.718 [2024-12-06 13:19:23.056315] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:25:16.718 [2024-12-06 13:19:23.110834] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:25:16.718 [2024-12-06 13:19:23.111619] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1329a50:1 started. 00:25:16.718 [2024-12-06 13:19:23.112845] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:16.718 [2024-12-06 13:19:23.112897] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:16.718 [2024-12-06 13:19:23.112921] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:16.718 [2024-12-06 13:19:23.112939] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:25:16.718 [2024-12-06 13:19:23.112967] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:25:16.718 [2024-12-06 13:19:23.118967] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1329a50 was disconnected and freed. delete nvme_qpair. 
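The re-attach above needs no new RPC: the discovery service started at the beginning of the test is still polling, so restoring the address and link is enough. The recovery steps from the trace, reusing the wait_for_bdev sketch from earlier:

    # Undo the fault inside the target namespace.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # The controller comes back under a fresh name (nvme1), so the test
    # now waits for nvme1n1 rather than nvme0n1.
    wait_for_bdev nvme1n1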
00:25:16.719 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:16.719 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:16.719 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.719 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.719 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:16.719 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:16.719 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:16.719 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.719 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:16.719 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:16.719 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91476 00:25:16.719 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 91476 ']' 00:25:16.719 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 91476 00:25:16.719 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:16.719 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:16.719 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91476 00:25:16.976 killing process with pid 91476 00:25:16.976 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:16.976 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:16.976 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91476' 00:25:16.976 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 91476 00:25:16.976 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 91476 00:25:16.976 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:16.976 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:16.976 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:16.976 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:16.976 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:16.977 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:16.977 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:16.977 rmmod nvme_tcp 00:25:16.977 rmmod nvme_fabrics 00:25:16.977 rmmod nvme_keyring 00:25:16.977 13:19:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:16.977 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:16.977 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:16.977 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 91445 ']' 00:25:16.977 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 91445 00:25:16.977 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 91445 ']' 00:25:16.977 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 91445 00:25:16.977 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:16.977 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:16.977 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91445 00:25:17.234 killing process with pid 91445 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91445' 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 91445 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 91445 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:17.234 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:17.492 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:17.492 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:17.492 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:17.492 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:17.492 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:17.492 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.492 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:17.492 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.492 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:25:17.492 00:25:17.492 real 0m13.166s 00:25:17.492 user 0m23.169s 00:25:17.492 sys 0m1.595s 00:25:17.492 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:17.492 13:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.492 ************************************ 00:25:17.492 END TEST nvmf_discovery_remove_ifc 00:25:17.492 ************************************ 00:25:17.492 13:19:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:17.492 13:19:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:17.492 13:19:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:17.492 13:19:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.492 ************************************ 00:25:17.492 START TEST nvmf_identify_kernel_target 00:25:17.492 ************************************ 00:25:17.492 13:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:17.750 * Looking for test storage... 
00:25:17.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:17.750 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.751 --rc genhtml_branch_coverage=1 00:25:17.751 --rc genhtml_function_coverage=1 00:25:17.751 --rc genhtml_legend=1 00:25:17.751 --rc geninfo_all_blocks=1 00:25:17.751 --rc geninfo_unexecuted_blocks=1 00:25:17.751 00:25:17.751 ' 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.751 --rc genhtml_branch_coverage=1 00:25:17.751 --rc genhtml_function_coverage=1 00:25:17.751 --rc genhtml_legend=1 00:25:17.751 --rc geninfo_all_blocks=1 00:25:17.751 --rc geninfo_unexecuted_blocks=1 00:25:17.751 00:25:17.751 ' 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.751 --rc genhtml_branch_coverage=1 00:25:17.751 --rc genhtml_function_coverage=1 00:25:17.751 --rc genhtml_legend=1 00:25:17.751 --rc geninfo_all_blocks=1 00:25:17.751 --rc geninfo_unexecuted_blocks=1 00:25:17.751 00:25:17.751 ' 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.751 --rc genhtml_branch_coverage=1 00:25:17.751 --rc genhtml_function_coverage=1 00:25:17.751 --rc genhtml_legend=1 00:25:17.751 --rc geninfo_all_blocks=1 00:25:17.751 --rc geninfo_unexecuted_blocks=1 00:25:17.751 00:25:17.751 ' 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
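
What looks like noise above is a pure-bash version comparison: common.sh asks whether the installed lcov is older than 2 by splitting both version strings on the characters in IFS=.-: and comparing them field by field, padding the shorter one with zeros. The same idea as a self-contained sketch (the function name is mine; scripts/common.sh spreads this logic across lt, cmp_versions and decimal):

  # Succeed when dotted version $1 is strictly lower than $2.
  version_lt() {
      local IFS=.-:               # split fields the way the trace does
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          # A missing field counts as 0, so "1.15" compares like "1.15.0".
          local x=${a[i]:-0} y=${b[i]:-0}
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1                    # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "1.15 < 2"    # prints: 1.15 < 2

Numeric fields are assumed, which holds for lcov's output; the traced helper additionally validates each field with a regex before comparing.
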
00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:17.751 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:17.751 13:19:24 
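
One line of real fallout is buried in the sourcing above: common.sh line 33 runs [ '' -eq 1 ] and bash prints "[: : integer expression expected", because -eq insists on integers on both sides and the tested variable is empty in this configuration. The script survives only because the failed test falls through to the next branch. The failure mode in isolation, with an illustrative variable name:

  flag=""                            # empty, as in this CI configuration

  [ "$flag" -eq 1 ] && echo enabled
  # -> [: : integer expression expected   (status 2; branch skipped)

  # Defensive spelling: give the numeric test a real number to chew on.
  if [ "${flag:-0}" -eq 1 ]; then
      echo enabled
  fi

Defaulting with ${var:-0} keeps the comparison purely numeric whether or not the environment exports the flag.
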
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:17.751 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:17.752 13:19:24 
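
nvmftestinit, traced above, does more than pick addresses: it keeps the whole "run this inside the target namespace" prefix in an array, NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE"), so any later command, including the nvmf target itself, can be relocated into the namespace by array expansion. A sketch of that idiom (namespace name as in the trace; the daemon path and flags are illustrative only):

  ns=nvmf_tgt_ns_spdk
  ns_cmd=(ip netns exec "$ns")       # reusable namespace prefix

  # Expand the array in front of any command to run it inside the namespace:
  "${ns_cmd[@]}" ip -brief addr show

  # The same expansion wraps a whole daemon invocation, which is how the
  # trace later rewrites NVMF_APP before launching the target:
  app=(/usr/local/bin/nvmf_tgt -i 0 -e 0xFFFF)   # illustrative binary/flags
  app=("${ns_cmd[@]}" "${app[@]}")
  echo "would run: ${app[*]}"

Arrays rather than strings are the point: quoting survives the expansion, so arguments containing spaces are not re-split.
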
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:17.752 Cannot find device "nvmf_init_br" 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:17.752 Cannot find device "nvmf_init_br2" 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:17.752 Cannot find device "nvmf_tgt_br" 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:17.752 Cannot find device "nvmf_tgt_br2" 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:17.752 Cannot find device "nvmf_init_br" 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:17.752 Cannot find device "nvmf_init_br2" 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:17.752 Cannot find device "nvmf_tgt_br" 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:17.752 Cannot find device "nvmf_tgt_br2" 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:17.752 Cannot find device "nvmf_br" 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:25:17.752 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:18.010 Cannot find device "nvmf_init_if" 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:18.010 Cannot find device "nvmf_init_if2" 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:18.010 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:18.010 13:19:24 
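
Every "Cannot find device" above is expected rather than a failure: before building anything, the helper tears down leftovers from a previous run, and each ip command is immediately followed by true so a missing interface cannot kill the script under set -e. The pattern in isolation (interface and namespace names as in the trace):

  set -e
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" down || true    # absent on a fresh host: fine
  done
  ip link delete nvmf_br type bridge || true
  ip netns delete nvmf_tgt_ns_spdk   || true

The trace deliberately lets the error messages print instead of redirecting them away, so the log records exactly which pieces of the old topology existed.
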
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:18.010 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:18.010 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:18.011 13:19:24 
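
Everything from ip netns add through ip link set nvmf_br up is the test topology being assembled: a network namespace stands in for the target host, veth pairs stand in for NICs, and (in the next stretch of the log) a bridge splices the host-side peers together. Condensed to one runnable sketch with only the first initiator/target pair, using the names and addresses from the trace (root required; the iptables ACCEPT rules that follow in the log are omitted here, and hosts that filter bridged traffic will need them):

  set -eux
  ip netns add nvmf_tgt_ns_spdk

  # One veth pair per emulated NIC; the *_br ends exist only to be
  # enslaved to the bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br

  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk   # target NIC into the ns

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peers so initiator and target can reach each other.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br

  ping -c 1 10.0.0.3        # initiator -> target, across the bridge

The sub-millisecond round-trip times in the ping statistics below are the tell that all of this is veth plumbing on one machine, not real NICs.
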
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:18.011 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:18.011 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:25:18.011 00:25:18.011 --- 10.0.0.3 ping statistics --- 00:25:18.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.011 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:18.011 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:18.011 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:25:18.011 00:25:18.011 --- 10.0.0.4 ping statistics --- 00:25:18.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.011 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:18.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:18.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:25:18.011 00:25:18.011 --- 10.0.0.1 ping statistics --- 00:25:18.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.011 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:18.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:18.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:25:18.011 00:25:18.011 --- 10.0.0.2 ping statistics --- 00:25:18.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.011 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:18.011 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:18.269 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:18.269 13:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:18.527 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:18.527 Waiting for block devices as requested 00:25:18.527 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:18.785 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:18.785 No valid GPT data, bailing 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:25:18.785 13:19:25 
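
The loop above is scratch-disk selection: every /sys/block/nvme* entry is checked for a zoned queue (plain writes would not do there) and then handed to spdk-gpt.py, whose "No valid GPT data, bailing" is the desired outcome, since a disk with no partition table is free to use as backing storage. An equivalent picker, sketched with the blkid fallback the trace also runs (root typically required; note the traced loop keeps overwriting its candidate and therefore ends up with the last blank disk, whereas this sketch stops at the first):

  pick_scratch_nvme() {
      local block dev
      for block in /sys/block/nvme*; do
          [ -e "$block" ] || continue
          dev=/dev/${block##*/}
          # Zoned namespaces need zone-aware I/O; skip them.
          if [ -e "$block/queue/zoned" ] &&
             [ "$(cat "$block/queue/zoned")" != none ]; then
              continue
          fi
          # Any PTTYPE output means a partition table: the disk is in use.
          if blkid -s PTTYPE -o value "$dev" | grep -q .; then
              continue
          fi
          echo "$dev" && return 0
      done
      return 1
  }

  nvme=$(pick_scratch_nvme) && echo "backing device: $nvme"
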
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:18.785 No valid GPT data, bailing 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:25:18.785 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:19.043 No valid GPT data, bailing 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:19.043 No valid GPT data, bailing 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -a 10.0.0.1 -t tcp -s 4420 00:25:19.043 00:25:19.043 Discovery Log Number of Records 2, Generation counter 2 00:25:19.043 =====Discovery Log Entry 0====== 00:25:19.043 trtype: tcp 00:25:19.043 adrfam: ipv4 00:25:19.043 subtype: current discovery subsystem 00:25:19.043 treq: not specified, sq flow control disable supported 00:25:19.043 portid: 1 00:25:19.043 trsvcid: 4420 00:25:19.043 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:19.043 traddr: 10.0.0.1 00:25:19.043 eflags: none 00:25:19.043 sectype: none 00:25:19.043 =====Discovery Log Entry 1====== 00:25:19.043 trtype: tcp 00:25:19.043 adrfam: ipv4 00:25:19.043 subtype: nvme subsystem 00:25:19.043 treq: not 
specified, sq flow control disable supported 00:25:19.043 portid: 1 00:25:19.043 trsvcid: 4420 00:25:19.043 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:19.043 traddr: 10.0.0.1 00:25:19.043 eflags: none 00:25:19.043 sectype: none 00:25:19.043 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:19.043 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:19.301 ===================================================== 00:25:19.301 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:19.301 ===================================================== 00:25:19.301 Controller Capabilities/Features 00:25:19.301 ================================ 00:25:19.301 Vendor ID: 0000 00:25:19.301 Subsystem Vendor ID: 0000 00:25:19.301 Serial Number: c3d27e9be0e754a850bc 00:25:19.301 Model Number: Linux 00:25:19.301 Firmware Version: 6.8.9-20 00:25:19.301 Recommended Arb Burst: 0 00:25:19.301 IEEE OUI Identifier: 00 00 00 00:25:19.301 Multi-path I/O 00:25:19.301 May have multiple subsystem ports: No 00:25:19.301 May have multiple controllers: No 00:25:19.301 Associated with SR-IOV VF: No 00:25:19.301 Max Data Transfer Size: Unlimited 00:25:19.301 Max Number of Namespaces: 0 00:25:19.301 Max Number of I/O Queues: 1024 00:25:19.301 NVMe Specification Version (VS): 1.3 00:25:19.301 NVMe Specification Version (Identify): 1.3 00:25:19.301 Maximum Queue Entries: 1024 00:25:19.301 Contiguous Queues Required: No 00:25:19.301 Arbitration Mechanisms Supported 00:25:19.301 Weighted Round Robin: Not Supported 00:25:19.301 Vendor Specific: Not Supported 00:25:19.301 Reset Timeout: 7500 ms 00:25:19.301 Doorbell Stride: 4 bytes 00:25:19.301 NVM Subsystem Reset: Not Supported 00:25:19.302 Command Sets Supported 00:25:19.302 NVM Command Set: Supported 00:25:19.302 Boot Partition: Not Supported 00:25:19.302 Memory Page Size Minimum: 4096 bytes 00:25:19.302 Memory Page Size Maximum: 4096 bytes 00:25:19.302 Persistent Memory Region: Not Supported 00:25:19.302 Optional Asynchronous Events Supported 00:25:19.302 Namespace Attribute Notices: Not Supported 00:25:19.302 Firmware Activation Notices: Not Supported 00:25:19.302 ANA Change Notices: Not Supported 00:25:19.302 PLE Aggregate Log Change Notices: Not Supported 00:25:19.302 LBA Status Info Alert Notices: Not Supported 00:25:19.302 EGE Aggregate Log Change Notices: Not Supported 00:25:19.302 Normal NVM Subsystem Shutdown event: Not Supported 00:25:19.302 Zone Descriptor Change Notices: Not Supported 00:25:19.302 Discovery Log Change Notices: Supported 00:25:19.302 Controller Attributes 00:25:19.302 128-bit Host Identifier: Not Supported 00:25:19.302 Non-Operational Permissive Mode: Not Supported 00:25:19.302 NVM Sets: Not Supported 00:25:19.302 Read Recovery Levels: Not Supported 00:25:19.302 Endurance Groups: Not Supported 00:25:19.302 Predictable Latency Mode: Not Supported 00:25:19.302 Traffic Based Keep ALive: Not Supported 00:25:19.302 Namespace Granularity: Not Supported 00:25:19.302 SQ Associations: Not Supported 00:25:19.302 UUID List: Not Supported 00:25:19.302 Multi-Domain Subsystem: Not Supported 00:25:19.302 Fixed Capacity Management: Not Supported 00:25:19.302 Variable Capacity Management: Not Supported 00:25:19.302 Delete Endurance Group: Not Supported 00:25:19.302 Delete NVM Set: Not Supported 00:25:19.302 Extended LBA Formats Supported: Not Supported 00:25:19.302 Flexible Data 
Placement Supported: Not Supported 00:25:19.302 00:25:19.302 Controller Memory Buffer Support 00:25:19.302 ================================ 00:25:19.302 Supported: No 00:25:19.302 00:25:19.302 Persistent Memory Region Support 00:25:19.302 ================================ 00:25:19.302 Supported: No 00:25:19.302 00:25:19.302 Admin Command Set Attributes 00:25:19.302 ============================ 00:25:19.302 Security Send/Receive: Not Supported 00:25:19.302 Format NVM: Not Supported 00:25:19.302 Firmware Activate/Download: Not Supported 00:25:19.302 Namespace Management: Not Supported 00:25:19.302 Device Self-Test: Not Supported 00:25:19.302 Directives: Not Supported 00:25:19.302 NVMe-MI: Not Supported 00:25:19.302 Virtualization Management: Not Supported 00:25:19.302 Doorbell Buffer Config: Not Supported 00:25:19.302 Get LBA Status Capability: Not Supported 00:25:19.302 Command & Feature Lockdown Capability: Not Supported 00:25:19.302 Abort Command Limit: 1 00:25:19.302 Async Event Request Limit: 1 00:25:19.302 Number of Firmware Slots: N/A 00:25:19.302 Firmware Slot 1 Read-Only: N/A 00:25:19.302 Firmware Activation Without Reset: N/A 00:25:19.302 Multiple Update Detection Support: N/A 00:25:19.302 Firmware Update Granularity: No Information Provided 00:25:19.302 Per-Namespace SMART Log: No 00:25:19.302 Asymmetric Namespace Access Log Page: Not Supported 00:25:19.302 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:19.302 Command Effects Log Page: Not Supported 00:25:19.302 Get Log Page Extended Data: Supported 00:25:19.302 Telemetry Log Pages: Not Supported 00:25:19.302 Persistent Event Log Pages: Not Supported 00:25:19.302 Supported Log Pages Log Page: May Support 00:25:19.302 Commands Supported & Effects Log Page: Not Supported 00:25:19.302 Feature Identifiers & Effects Log Page:May Support 00:25:19.302 NVMe-MI Commands & Effects Log Page: May Support 00:25:19.302 Data Area 4 for Telemetry Log: Not Supported 00:25:19.302 Error Log Page Entries Supported: 1 00:25:19.302 Keep Alive: Not Supported 00:25:19.302 00:25:19.302 NVM Command Set Attributes 00:25:19.302 ========================== 00:25:19.302 Submission Queue Entry Size 00:25:19.302 Max: 1 00:25:19.302 Min: 1 00:25:19.302 Completion Queue Entry Size 00:25:19.302 Max: 1 00:25:19.302 Min: 1 00:25:19.302 Number of Namespaces: 0 00:25:19.302 Compare Command: Not Supported 00:25:19.302 Write Uncorrectable Command: Not Supported 00:25:19.302 Dataset Management Command: Not Supported 00:25:19.302 Write Zeroes Command: Not Supported 00:25:19.302 Set Features Save Field: Not Supported 00:25:19.302 Reservations: Not Supported 00:25:19.302 Timestamp: Not Supported 00:25:19.302 Copy: Not Supported 00:25:19.302 Volatile Write Cache: Not Present 00:25:19.302 Atomic Write Unit (Normal): 1 00:25:19.302 Atomic Write Unit (PFail): 1 00:25:19.302 Atomic Compare & Write Unit: 1 00:25:19.302 Fused Compare & Write: Not Supported 00:25:19.302 Scatter-Gather List 00:25:19.302 SGL Command Set: Supported 00:25:19.302 SGL Keyed: Not Supported 00:25:19.302 SGL Bit Bucket Descriptor: Not Supported 00:25:19.302 SGL Metadata Pointer: Not Supported 00:25:19.302 Oversized SGL: Not Supported 00:25:19.302 SGL Metadata Address: Not Supported 00:25:19.302 SGL Offset: Supported 00:25:19.302 Transport SGL Data Block: Not Supported 00:25:19.302 Replay Protected Memory Block: Not Supported 00:25:19.302 00:25:19.302 Firmware Slot Information 00:25:19.302 ========================= 00:25:19.302 Active slot: 0 00:25:19.302 00:25:19.302 00:25:19.302 Error Log 
00:25:19.302 ========= 00:25:19.302 00:25:19.302 Active Namespaces 00:25:19.302 ================= 00:25:19.302 Discovery Log Page 00:25:19.302 ================== 00:25:19.302 Generation Counter: 2 00:25:19.302 Number of Records: 2 00:25:19.302 Record Format: 0 00:25:19.302 00:25:19.302 Discovery Log Entry 0 00:25:19.302 ---------------------- 00:25:19.302 Transport Type: 3 (TCP) 00:25:19.302 Address Family: 1 (IPv4) 00:25:19.302 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:19.302 Entry Flags: 00:25:19.302 Duplicate Returned Information: 0 00:25:19.302 Explicit Persistent Connection Support for Discovery: 0 00:25:19.302 Transport Requirements: 00:25:19.302 Secure Channel: Not Specified 00:25:19.302 Port ID: 1 (0x0001) 00:25:19.302 Controller ID: 65535 (0xffff) 00:25:19.302 Admin Max SQ Size: 32 00:25:19.302 Transport Service Identifier: 4420 00:25:19.302 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:19.302 Transport Address: 10.0.0.1 00:25:19.302 Discovery Log Entry 1 00:25:19.302 ---------------------- 00:25:19.302 Transport Type: 3 (TCP) 00:25:19.302 Address Family: 1 (IPv4) 00:25:19.303 Subsystem Type: 2 (NVM Subsystem) 00:25:19.303 Entry Flags: 00:25:19.303 Duplicate Returned Information: 0 00:25:19.303 Explicit Persistent Connection Support for Discovery: 0 00:25:19.303 Transport Requirements: 00:25:19.303 Secure Channel: Not Specified 00:25:19.303 Port ID: 1 (0x0001) 00:25:19.303 Controller ID: 65535 (0xffff) 00:25:19.303 Admin Max SQ Size: 32 00:25:19.303 Transport Service Identifier: 4420 00:25:19.303 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:19.303 Transport Address: 10.0.0.1 00:25:19.303 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:19.563 get_feature(0x01) failed 00:25:19.563 get_feature(0x02) failed 00:25:19.563 get_feature(0x04) failed 00:25:19.563 ===================================================== 00:25:19.563 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:19.563 ===================================================== 00:25:19.563 Controller Capabilities/Features 00:25:19.563 ================================ 00:25:19.563 Vendor ID: 0000 00:25:19.563 Subsystem Vendor ID: 0000 00:25:19.563 Serial Number: 4fe1a9169a5f1ff1aee9 00:25:19.563 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:19.563 Firmware Version: 6.8.9-20 00:25:19.563 Recommended Arb Burst: 6 00:25:19.563 IEEE OUI Identifier: 00 00 00 00:25:19.563 Multi-path I/O 00:25:19.563 May have multiple subsystem ports: Yes 00:25:19.563 May have multiple controllers: Yes 00:25:19.563 Associated with SR-IOV VF: No 00:25:19.563 Max Data Transfer Size: Unlimited 00:25:19.563 Max Number of Namespaces: 1024 00:25:19.563 Max Number of I/O Queues: 128 00:25:19.563 NVMe Specification Version (VS): 1.3 00:25:19.563 NVMe Specification Version (Identify): 1.3 00:25:19.563 Maximum Queue Entries: 1024 00:25:19.563 Contiguous Queues Required: No 00:25:19.563 Arbitration Mechanisms Supported 00:25:19.563 Weighted Round Robin: Not Supported 00:25:19.563 Vendor Specific: Not Supported 00:25:19.563 Reset Timeout: 7500 ms 00:25:19.563 Doorbell Stride: 4 bytes 00:25:19.563 NVM Subsystem Reset: Not Supported 00:25:19.563 Command Sets Supported 00:25:19.563 NVM Command Set: Supported 00:25:19.563 Boot Partition: Not Supported 00:25:19.563 Memory 
Page Size Minimum: 4096 bytes 00:25:19.563 Memory Page Size Maximum: 4096 bytes 00:25:19.563 Persistent Memory Region: Not Supported 00:25:19.563 Optional Asynchronous Events Supported 00:25:19.563 Namespace Attribute Notices: Supported 00:25:19.563 Firmware Activation Notices: Not Supported 00:25:19.563 ANA Change Notices: Supported 00:25:19.563 PLE Aggregate Log Change Notices: Not Supported 00:25:19.563 LBA Status Info Alert Notices: Not Supported 00:25:19.563 EGE Aggregate Log Change Notices: Not Supported 00:25:19.563 Normal NVM Subsystem Shutdown event: Not Supported 00:25:19.563 Zone Descriptor Change Notices: Not Supported 00:25:19.563 Discovery Log Change Notices: Not Supported 00:25:19.563 Controller Attributes 00:25:19.563 128-bit Host Identifier: Supported 00:25:19.563 Non-Operational Permissive Mode: Not Supported 00:25:19.563 NVM Sets: Not Supported 00:25:19.563 Read Recovery Levels: Not Supported 00:25:19.563 Endurance Groups: Not Supported 00:25:19.563 Predictable Latency Mode: Not Supported 00:25:19.563 Traffic Based Keep ALive: Supported 00:25:19.563 Namespace Granularity: Not Supported 00:25:19.563 SQ Associations: Not Supported 00:25:19.563 UUID List: Not Supported 00:25:19.563 Multi-Domain Subsystem: Not Supported 00:25:19.563 Fixed Capacity Management: Not Supported 00:25:19.563 Variable Capacity Management: Not Supported 00:25:19.563 Delete Endurance Group: Not Supported 00:25:19.563 Delete NVM Set: Not Supported 00:25:19.563 Extended LBA Formats Supported: Not Supported 00:25:19.563 Flexible Data Placement Supported: Not Supported 00:25:19.563 00:25:19.563 Controller Memory Buffer Support 00:25:19.563 ================================ 00:25:19.563 Supported: No 00:25:19.563 00:25:19.563 Persistent Memory Region Support 00:25:19.563 ================================ 00:25:19.563 Supported: No 00:25:19.563 00:25:19.563 Admin Command Set Attributes 00:25:19.563 ============================ 00:25:19.563 Security Send/Receive: Not Supported 00:25:19.563 Format NVM: Not Supported 00:25:19.563 Firmware Activate/Download: Not Supported 00:25:19.563 Namespace Management: Not Supported 00:25:19.563 Device Self-Test: Not Supported 00:25:19.563 Directives: Not Supported 00:25:19.563 NVMe-MI: Not Supported 00:25:19.563 Virtualization Management: Not Supported 00:25:19.563 Doorbell Buffer Config: Not Supported 00:25:19.563 Get LBA Status Capability: Not Supported 00:25:19.563 Command & Feature Lockdown Capability: Not Supported 00:25:19.563 Abort Command Limit: 4 00:25:19.563 Async Event Request Limit: 4 00:25:19.563 Number of Firmware Slots: N/A 00:25:19.563 Firmware Slot 1 Read-Only: N/A 00:25:19.563 Firmware Activation Without Reset: N/A 00:25:19.563 Multiple Update Detection Support: N/A 00:25:19.563 Firmware Update Granularity: No Information Provided 00:25:19.563 Per-Namespace SMART Log: Yes 00:25:19.563 Asymmetric Namespace Access Log Page: Supported 00:25:19.563 ANA Transition Time : 10 sec 00:25:19.563 00:25:19.563 Asymmetric Namespace Access Capabilities 00:25:19.563 ANA Optimized State : Supported 00:25:19.563 ANA Non-Optimized State : Supported 00:25:19.563 ANA Inaccessible State : Supported 00:25:19.563 ANA Persistent Loss State : Supported 00:25:19.563 ANA Change State : Supported 00:25:19.563 ANAGRPID is not changed : No 00:25:19.563 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:19.563 00:25:19.563 ANA Group Identifier Maximum : 128 00:25:19.563 Number of ANA Group Identifiers : 128 00:25:19.563 Max Number of Allowed Namespaces : 1024 00:25:19.563 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:25:19.563 Command Effects Log Page: Supported 00:25:19.563 Get Log Page Extended Data: Supported 00:25:19.564 Telemetry Log Pages: Not Supported 00:25:19.564 Persistent Event Log Pages: Not Supported 00:25:19.564 Supported Log Pages Log Page: May Support 00:25:19.564 Commands Supported & Effects Log Page: Not Supported 00:25:19.564 Feature Identifiers & Effects Log Page:May Support 00:25:19.564 NVMe-MI Commands & Effects Log Page: May Support 00:25:19.564 Data Area 4 for Telemetry Log: Not Supported 00:25:19.564 Error Log Page Entries Supported: 128 00:25:19.564 Keep Alive: Supported 00:25:19.564 Keep Alive Granularity: 1000 ms 00:25:19.564 00:25:19.564 NVM Command Set Attributes 00:25:19.564 ========================== 00:25:19.564 Submission Queue Entry Size 00:25:19.564 Max: 64 00:25:19.564 Min: 64 00:25:19.564 Completion Queue Entry Size 00:25:19.564 Max: 16 00:25:19.564 Min: 16 00:25:19.564 Number of Namespaces: 1024 00:25:19.564 Compare Command: Not Supported 00:25:19.564 Write Uncorrectable Command: Not Supported 00:25:19.564 Dataset Management Command: Supported 00:25:19.564 Write Zeroes Command: Supported 00:25:19.564 Set Features Save Field: Not Supported 00:25:19.564 Reservations: Not Supported 00:25:19.564 Timestamp: Not Supported 00:25:19.564 Copy: Not Supported 00:25:19.564 Volatile Write Cache: Present 00:25:19.564 Atomic Write Unit (Normal): 1 00:25:19.564 Atomic Write Unit (PFail): 1 00:25:19.564 Atomic Compare & Write Unit: 1 00:25:19.564 Fused Compare & Write: Not Supported 00:25:19.564 Scatter-Gather List 00:25:19.564 SGL Command Set: Supported 00:25:19.564 SGL Keyed: Not Supported 00:25:19.564 SGL Bit Bucket Descriptor: Not Supported 00:25:19.564 SGL Metadata Pointer: Not Supported 00:25:19.564 Oversized SGL: Not Supported 00:25:19.564 SGL Metadata Address: Not Supported 00:25:19.564 SGL Offset: Supported 00:25:19.564 Transport SGL Data Block: Not Supported 00:25:19.564 Replay Protected Memory Block: Not Supported 00:25:19.564 00:25:19.564 Firmware Slot Information 00:25:19.564 ========================= 00:25:19.564 Active slot: 0 00:25:19.564 00:25:19.564 Asymmetric Namespace Access 00:25:19.564 =========================== 00:25:19.564 Change Count : 0 00:25:19.564 Number of ANA Group Descriptors : 1 00:25:19.564 ANA Group Descriptor : 0 00:25:19.564 ANA Group ID : 1 00:25:19.564 Number of NSID Values : 1 00:25:19.564 Change Count : 0 00:25:19.564 ANA State : 1 00:25:19.564 Namespace Identifier : 1 00:25:19.564 00:25:19.564 Commands Supported and Effects 00:25:19.564 ============================== 00:25:19.564 Admin Commands 00:25:19.564 -------------- 00:25:19.564 Get Log Page (02h): Supported 00:25:19.564 Identify (06h): Supported 00:25:19.564 Abort (08h): Supported 00:25:19.564 Set Features (09h): Supported 00:25:19.564 Get Features (0Ah): Supported 00:25:19.564 Asynchronous Event Request (0Ch): Supported 00:25:19.564 Keep Alive (18h): Supported 00:25:19.564 I/O Commands 00:25:19.564 ------------ 00:25:19.564 Flush (00h): Supported 00:25:19.564 Write (01h): Supported LBA-Change 00:25:19.564 Read (02h): Supported 00:25:19.564 Write Zeroes (08h): Supported LBA-Change 00:25:19.564 Dataset Management (09h): Supported 00:25:19.564 00:25:19.564 Error Log 00:25:19.564 ========= 00:25:19.564 Entry: 0 00:25:19.564 Error Count: 0x3 00:25:19.564 Submission Queue Id: 0x0 00:25:19.564 Command Id: 0x5 00:25:19.564 Phase Bit: 0 00:25:19.564 Status Code: 0x2 00:25:19.564 Status Code Type: 0x0 00:25:19.564 Do Not Retry: 1 00:25:19.564 Error 
Location: 0x28 00:25:19.564 LBA: 0x0 00:25:19.564 Namespace: 0x0 00:25:19.564 Vendor Log Page: 0x0 00:25:19.564 ----------- 00:25:19.564 Entry: 1 00:25:19.564 Error Count: 0x2 00:25:19.564 Submission Queue Id: 0x0 00:25:19.564 Command Id: 0x5 00:25:19.564 Phase Bit: 0 00:25:19.564 Status Code: 0x2 00:25:19.564 Status Code Type: 0x0 00:25:19.564 Do Not Retry: 1 00:25:19.564 Error Location: 0x28 00:25:19.564 LBA: 0x0 00:25:19.564 Namespace: 0x0 00:25:19.564 Vendor Log Page: 0x0 00:25:19.564 ----------- 00:25:19.564 Entry: 2 00:25:19.564 Error Count: 0x1 00:25:19.564 Submission Queue Id: 0x0 00:25:19.564 Command Id: 0x4 00:25:19.564 Phase Bit: 0 00:25:19.564 Status Code: 0x2 00:25:19.564 Status Code Type: 0x0 00:25:19.564 Do Not Retry: 1 00:25:19.564 Error Location: 0x28 00:25:19.564 LBA: 0x0 00:25:19.564 Namespace: 0x0 00:25:19.564 Vendor Log Page: 0x0 00:25:19.564 00:25:19.564 Number of Queues 00:25:19.564 ================ 00:25:19.564 Number of I/O Submission Queues: 128 00:25:19.564 Number of I/O Completion Queues: 128 00:25:19.564 00:25:19.564 ZNS Specific Controller Data 00:25:19.564 ============================ 00:25:19.564 Zone Append Size Limit: 0 00:25:19.564 00:25:19.564 00:25:19.564 Active Namespaces 00:25:19.564 ================= 00:25:19.564 get_feature(0x05) failed 00:25:19.564 Namespace ID:1 00:25:19.564 Command Set Identifier: NVM (00h) 00:25:19.564 Deallocate: Supported 00:25:19.564 Deallocated/Unwritten Error: Not Supported 00:25:19.564 Deallocated Read Value: Unknown 00:25:19.564 Deallocate in Write Zeroes: Not Supported 00:25:19.564 Deallocated Guard Field: 0xFFFF 00:25:19.564 Flush: Supported 00:25:19.564 Reservation: Not Supported 00:25:19.564 Namespace Sharing Capabilities: Multiple Controllers 00:25:19.564 Size (in LBAs): 1310720 (5GiB) 00:25:19.564 Capacity (in LBAs): 1310720 (5GiB) 00:25:19.564 Utilization (in LBAs): 1310720 (5GiB) 00:25:19.564 UUID: 5b41d2ec-2a20-4dd9-8b98-8c848e5929f2 00:25:19.564 Thin Provisioning: Not Supported 00:25:19.564 Per-NS Atomic Units: Yes 00:25:19.564 Atomic Boundary Size (Normal): 0 00:25:19.564 Atomic Boundary Size (PFail): 0 00:25:19.564 Atomic Boundary Offset: 0 00:25:19.564 NGUID/EUI64 Never Reused: No 00:25:19.564 ANA group ID: 1 00:25:19.564 Namespace Write Protected: No 00:25:19.564 Number of LBA Formats: 1 00:25:19.564 Current LBA Format: LBA Format #00 00:25:19.564 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:25:19.564 00:25:19.564 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:19.564 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:19.564 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:19.564 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:19.564 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:19.564 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:19.564 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:19.564 rmmod nvme_tcp 00:25:19.564 rmmod nvme_fabrics 00:25:19.564 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:19.564 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:19.564 13:19:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:19.564 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:25:19.564 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:19.564 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:19.564 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:19.564 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:19.564 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:19.564 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:19.564 13:19:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:19.564 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:19.564 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:19.564 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:19.564 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:19.564 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:19.564 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:19.564 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:19.564 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:19.564 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:19.823 13:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:20.758 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:20.758 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:20.758 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:20.758 00:25:20.758 real 0m3.219s 00:25:20.758 user 0m1.167s 00:25:20.758 sys 0m1.462s 00:25:20.758 13:19:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:20.758 ************************************ 00:25:20.758 END TEST nvmf_identify_kernel_target 00:25:20.758 ************************************ 00:25:20.758 13:19:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.758 13:19:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:20.758 13:19:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:20.758 13:19:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:20.758 13:19:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.758 ************************************ 00:25:20.758 START TEST nvmf_auth_host 00:25:20.758 ************************************ 00:25:20.758 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:21.020 * Looking for test storage... 
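The clean_kernel_target sequence traced just above tears the kernel nvmet target down through configfs in the reverse order of its setup: the port-to-subsystem symlink has to go before the namespace, port, and subsystem directories can be rmdir'ed, and only then can the nvmet modules be unloaded. A standalone sketch of the same teardown, assuming the testnqn subsystem with namespace 1 on port 1 as in the trace (xtrace does not show redirect targets, so the enable-attribute path for the 'echo 0' is inferred):

# Sketch: tear down a kernel NVMe-oF target created under configfs.
nqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet

echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"   # disable the namespace first (redirect target inferred)
rm -f "$cfg/ports/1/subsystems/$nqn"                  # unlink port from subsystem; rmdir fails while the link exists
rmdir "$cfg/subsystems/$nqn/namespaces/1"
rmdir "$cfg/ports/1"
rmdir "$cfg/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet                           # finally drop the transport and core modules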
00:25:21.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:21.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.020 --rc genhtml_branch_coverage=1 00:25:21.020 --rc genhtml_function_coverage=1 00:25:21.020 --rc genhtml_legend=1 00:25:21.020 --rc geninfo_all_blocks=1 00:25:21.020 --rc geninfo_unexecuted_blocks=1 00:25:21.020 00:25:21.020 ' 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:21.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.020 --rc genhtml_branch_coverage=1 00:25:21.020 --rc genhtml_function_coverage=1 00:25:21.020 --rc genhtml_legend=1 00:25:21.020 --rc geninfo_all_blocks=1 00:25:21.020 --rc geninfo_unexecuted_blocks=1 00:25:21.020 00:25:21.020 ' 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:21.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.020 --rc genhtml_branch_coverage=1 00:25:21.020 --rc genhtml_function_coverage=1 00:25:21.020 --rc genhtml_legend=1 00:25:21.020 --rc geninfo_all_blocks=1 00:25:21.020 --rc geninfo_unexecuted_blocks=1 00:25:21.020 00:25:21.020 ' 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:21.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.020 --rc genhtml_branch_coverage=1 00:25:21.020 --rc genhtml_function_coverage=1 00:25:21.020 --rc genhtml_legend=1 00:25:21.020 --rc geninfo_all_blocks=1 00:25:21.020 --rc geninfo_unexecuted_blocks=1 00:25:21.020 00:25:21.020 ' 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:21.020 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:21.020 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:21.021 Cannot find device "nvmf_init_br" 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:21.021 Cannot find device "nvmf_init_br2" 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:21.021 Cannot find device "nvmf_tgt_br" 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:21.021 Cannot find device "nvmf_tgt_br2" 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:21.021 Cannot find device "nvmf_init_br" 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:21.021 Cannot find device "nvmf_init_br2" 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:21.021 Cannot find device "nvmf_tgt_br" 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:21.021 Cannot find device "nvmf_tgt_br2" 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:21.021 Cannot find device "nvmf_br" 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:25:21.021 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:21.279 Cannot find device "nvmf_init_if" 00:25:21.279 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:25:21.279 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:21.279 Cannot find device "nvmf_init_if2" 00:25:21.279 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:25:21.279 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:21.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:21.280 13:19:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:21.280 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
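The nvmf_veth_init trace above builds the test topology: two initiator veth pairs whose ends all stay in the root namespace, two target pairs with one end each moved into nvmf_tgt_ns_spdk, addresses 10.0.0.1-10.0.0.4/24, and a bridge that joins the root-side ends. A condensed sketch with a single initiator/target pair, using the same names and addresses as the trace:

# Sketch: one initiator/target veth pair joined by the nvmf_br bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk        # target end lives inside the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br               # enslaving the root-side ends bridges 10.0.0.1 <-> 10.0.0.3
ip link set nvmf_tgt_br master nvmf_br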
00:25:21.280 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:21.538 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:21.538 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:25:21.538 00:25:21.538 --- 10.0.0.3 ping statistics --- 00:25:21.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.538 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:21.538 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:21.538 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:25:21.538 00:25:21.538 --- 10.0.0.4 ping statistics --- 00:25:21.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.538 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:21.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:21.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:25:21.538 00:25:21.538 --- 10.0.0.1 ping statistics --- 00:25:21.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.538 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:21.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:21.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:25:21.538 00:25:21.538 --- 10.0.0.2 ping statistics --- 00:25:21.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.538 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=92475 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 92475 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 92475 ']' 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
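After the ping checks confirm connectivity across the bridge in both directions, nvmfappstart launches nvmf_tgt inside the target namespace with 'nvme_auth' debug logging (-L nvme_auth), stores the pid in nvmfpid, and blocks in waitforlisten until the app answers on its RPC Unix socket. A minimal sketch of that launch-and-wait pattern; the rpc.py probe shown here is an illustrative stand-in for waitforlisten's own socket polling, not its exact implementation:

# Sketch: start the target in the netns and wait for its RPC socket.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!

echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
for _ in {1..100}; do
    # rpc_get_methods fails until the app is listening on /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done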
00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.538 13:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=55ea645d1ad37625b6862c0b48510532 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wVT 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 55ea645d1ad37625b6862c0b48510532 0 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 55ea645d1ad37625b6862c0b48510532 0 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=55ea645d1ad37625b6862c0b48510532 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wVT 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wVT 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.wVT 00:25:21.796 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:22.053 13:19:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2f5ec65bf8c5655aaf907624c38156ca8fd109aa760e931f0b6ab5c3d92568ab 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.kh1 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2f5ec65bf8c5655aaf907624c38156ca8fd109aa760e931f0b6ab5c3d92568ab 3 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2f5ec65bf8c5655aaf907624c38156ca8fd109aa760e931f0b6ab5c3d92568ab 3 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2f5ec65bf8c5655aaf907624c38156ca8fd109aa760e931f0b6ab5c3d92568ab 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.kh1 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.kh1 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.kh1 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fd11ab9d8dd865d81dd9fd3f63040d2d14f51e660b360744 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3DX 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fd11ab9d8dd865d81dd9fd3f63040d2d14f51e660b360744 0 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fd11ab9d8dd865d81dd9fd3f63040d2d14f51e660b360744 0 
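gen_dhchap_key, whose first invocations are traced above, reads len/2 random bytes, keeps their hex expansion as the secret string (so 'null 32' yields a 32-character secret), and wraps it in the DH-HMAC-CHAP secret representation 'DHHC-1:<digest id>:<base64>:', where the digest id comes from the digests map (null=0, sha256=1, sha384=2, sha512=3) and the base64 covers the secret bytes plus their little-endian CRC-32. A standalone sketch of that transform; the python body is illustrative rather than SPDK's exact format_key code:

# Sketch: build a DHHC-1 secret the way 'gen_dhchap_key null 32' does.
key=$(xxd -p -c0 -l 16 /dev/urandom)   # 16 random bytes -> 32 hex chars
digest=0                               # null digest => "DHHC-1:00:...:"

python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()          # the ASCII hex string itself is the secret
digest = int(sys.argv[2])
blob = secret + zlib.crc32(secret).to_bytes(4, "little")   # append CRC-32 of the secret
print(f"DHHC-1:{digest:02x}:{base64.b64encode(blob).decode()}:")
EOF

The chmod 0600 applied to each generated file in the trace matters because these files are secrets: they are handed to the target later via keyring_file_add_key.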
00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fd11ab9d8dd865d81dd9fd3f63040d2d14f51e660b360744 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3DX 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3DX 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.3DX 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c81c1d7b38eab8142c811a59f13ec990ca2e11f9e8664fd2 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.6ve 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c81c1d7b38eab8142c811a59f13ec990ca2e11f9e8664fd2 2 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c81c1d7b38eab8142c811a59f13ec990ca2e11f9e8664fd2 2 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c81c1d7b38eab8142c811a59f13ec990ca2e11f9e8664fd2 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.6ve 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.6ve 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.6ve 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:22.053 13:19:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:22.053 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:22.054 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:22.054 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:22.054 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e4896e1efd4526167c801166288ca27a 00:25:22.054 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:22.054 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.dh7 00:25:22.054 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e4896e1efd4526167c801166288ca27a 1 00:25:22.054 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e4896e1efd4526167c801166288ca27a 1 00:25:22.054 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:22.054 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:22.054 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e4896e1efd4526167c801166288ca27a 00:25:22.054 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:22.054 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.dh7 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.dh7 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.dh7 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8665562b71851571e031465cf6da7181 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.zjH 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8665562b71851571e031465cf6da7181 1 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8665562b71851571e031465cf6da7181 1 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=8665562b71851571e031465cf6da7181 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.zjH 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.zjH 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.zjH 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=30cb868576d075f3e3d4954e36cea5269ce27f14f8f53574 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ejc 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 30cb868576d075f3e3d4954e36cea5269ce27f14f8f53574 2 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 30cb868576d075f3e3d4954e36cea5269ce27f14f8f53574 2 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=30cb868576d075f3e3d4954e36cea5269ce27f14f8f53574 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ejc 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ejc 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ejc 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:22.311 13:19:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fd53404a3c6502cee75348183cf55d41 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.R99 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fd53404a3c6502cee75348183cf55d41 0 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fd53404a3c6502cee75348183cf55d41 0 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fd53404a3c6502cee75348183cf55d41 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.R99 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.R99 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.R99 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e0ad21b748ad6c5a35681a28d9a05b1d697d535e141a354ec8e7f97ca15bfdd5 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.b1g 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e0ad21b748ad6c5a35681a28d9a05b1d697d535e141a354ec8e7f97ca15bfdd5 3 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e0ad21b748ad6c5a35681a28d9a05b1d697d535e141a354ec8e7f97ca15bfdd5 3 00:25:22.311 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:22.312 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:22.312 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e0ad21b748ad6c5a35681a28d9a05b1d697d535e141a354ec8e7f97ca15bfdd5 00:25:22.312 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:22.312 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python -
00:25:22.586 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.b1g
00:25:22.586 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.b1g
00:25:22.586 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.b1g
00:25:22.586 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]=
00:25:22.586 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 92475
00:25:22.586 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 92475 ']'
00:25:22.586 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:22.586 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:22.586 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:22.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:22.586 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:22.586 13:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wVT
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.kh1 ]]
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kh1
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.3DX
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.6ve ]]
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6ve
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.dh7
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.zjH ]]
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zjH
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ejc
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.R99 ]]
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.R99
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.b1g
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]]
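What the trace above records is gen_dhchap_key at work five times over: the digest name is mapped to its DH-HMAC-CHAP identifier, len/2 random bytes are read from /dev/urandom as hex through xxd, an inline python step wraps the hex into a DHHC-1 secret, the file is locked down to mode 0600, and the path is handed back into keys[]/ckeys[]; host/auth.sh then registers every file with the SPDK keyring over RPC. A condensed sketch of that flow, assuming the python body follows the DHHC-1 secret layout (key bytes plus a CRC-32, base64-encoded) and that rpc_cmd is the autotest wrapper around scripts/rpc.py; neither detail is shown verbatim in the log:

gen_dhchap_key() {
    local digest=$1 len=$2
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # DHHC-1:<id>:<base64(key || crc32(key))>: -- layout assumed, not logged
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
    chmod 0600 "$file"
    echo "$file"
}

# registration loop mirroring host/auth.sh@80-82 above
for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
done

Note that ckeys[4] is deliberately left empty above, so the [[ -n '' ]] guard skips its registration; that slot exercises authentication without a controller key later in the run.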
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme
00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ !
-e /sys/module/nvmet ]] 00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:22.843 13:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:23.131 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:23.131 Waiting for block devices as requested 00:25:23.421 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:23.421 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:23.988 No valid GPT data, bailing 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:23.988 No valid GPT data, bailing 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1
00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2
00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]]
00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3
00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3
00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]]
00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3
00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt
00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3
00:25:23.988 No valid GPT data, bailing
00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3
00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3
00:25:23.988 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]]
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1
00:25:24.248 No valid GPT data, bailing
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]]
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:25:24.248 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -a 10.0.0.1 -t tcp -s 4420
00:25:24.248
00:25:24.248 Discovery Log Number of Records 2, Generation counter 2
00:25:24.248 =====Discovery Log Entry 0======
00:25:24.248 trtype: tcp
00:25:24.248 adrfam: ipv4
00:25:24.248 subtype: current discovery subsystem
00:25:24.248 treq: not specified, sq flow control disable supported
00:25:24.248 portid: 1
00:25:24.248 trsvcid: 4420
00:25:24.248 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:25:24.248 traddr: 10.0.0.1
00:25:24.248 eflags: none
00:25:24.248 sectype: none
00:25:24.248 =====Discovery Log Entry 1======
00:25:24.248 trtype: tcp
00:25:24.248 adrfam: ipv4
00:25:24.248 subtype: nvme subsystem
00:25:24.248 treq: not specified, sq flow control disable supported
00:25:24.248 portid: 1
00:25:24.248 trsvcid: 4420
00:25:24.248 subnqn: nqn.2024-02.io.spdk:cnode0
00:25:24.248 traddr: 10.0.0.1
00:25:24.248 eflags: none
00:25:24.248 sectype: none
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
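Because xtrace does not show redirections, the bare mkdir/echo/ln -s calls above look context-free. Reconstructed against the standard kernel nvmet configfs layout, configure_kernel_target plus the nvmet_auth_init that follows amount to roughly the sketch below; the attribute file names are inferred from that layout and are not visible in the log:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"    # target file assumed
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"         # the unused block device found above
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
# nvmet_auth_init: register the host NQN, then stop accepting arbitrary hosts
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"

The nvme discover output above confirms the result: both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 are reachable at 10.0.0.1:4420.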
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==:
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==:
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==:
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: ]]
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==:
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:24.249 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:24.508 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:24.508 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:24.508 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:24.508 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:24.508 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:24.508 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:24.508 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:24.508 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:24.508 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:24.508 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:24.508 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z
10.0.0.1 ]] 00:25:24.508 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.508 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.508 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.509 nvme0n1 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: ]] 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.509 13:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.768 nvme0n1 00:25:24.768 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.768 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.768 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.768 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.768 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.768 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.768 
13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.768 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.768 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.768 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: ]] 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.769 13:19:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.769 nvme0n1 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.769 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.028 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.028 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.028 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:25.029 13:19:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: ]] 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.029 nvme0n1 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: ]] 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.029 13:19:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.029 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.288 nvme0n1 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:25.288 
13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:25.288 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
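From this point to the end of the run the log is the same cycle repeated for every digest, DH group, and key index, differing only in those three parameters. One iteration, condensed into the two helpers the trace keeps calling; the dhchap_* attribute paths on the target side are assumptions based on the nvmet configfs ABI, since xtrace hides the redirects, while the initiator-side RPCs appear verbatim in the log:

nvmet_auth_set_key() {    # target side: per-host DH-HMAC-CHAP material
    local digest=$1 dhgroup=$2 keyid=$3
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac($digest)" > "$host/dhchap_hash"       # attribute names assumed
    echo "$dhgroup" > "$host/dhchap_dhgroup"
    cat "${keys[keyid]}" > "$host/dhchap_key"        # the script echoes the key string itself
    [[ -n ${ckeys[keyid]} ]] && cat "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
}

connect_authenticate() {  # initiator side, mirroring host/auth.sh@55-65
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey[@]}"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

keyid 4 is the unidirectional case: ckeys[4] is empty, so no --dhchap-ctrlr-key is passed and the controller key stays unset, which is why the [[ -n '' ]] and [[ -z '' ]] guards in the block just above fall through.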
00:25:25.289 nvme0n1 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.289 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.548 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.548 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.548 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.548 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.548 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.548 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:25.548 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.548 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:25.548 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.548 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.548 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:25.548 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:25.548 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:25.548 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:25.548 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.548 13:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:25.807 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:25.807 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: ]] 00:25:25.807 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:25.807 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:25.807 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.807 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.807 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:25.807 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:25.808 13:19:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.808 nvme0n1 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.808 13:19:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: ]] 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.808 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.067 13:19:32 
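All secrets in this run use the DHHC-1 key format from the NVMe DH-HMAC-CHAP spec: DHHC-1:<transform>:<base64 payload>:. The transform field (00 here; 01/02/03 would mean a SHA-256/384/512-transformed secret, per my reading of nvme-cli's gen-dhchap-key, not something this log states) is followed by the secret plus a CRC32 in base64. A quick sanity check on the keyid=0 host key from above:

  key='DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z:'
  payload=${key#DHHC-1:00:}                 # strip the prefix
  payload=${payload%:}                      # and the trailing colon
  echo -n "$payload" | base64 -d | wc -c    # prints 36: 32-byte secret + 4-byte CRC32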
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.067 nvme0n1 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.067 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: ]] 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
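Each iteration passes only if the attach actually produced a controller: auth.sh@64 lists controllers and string-matches the name, and @65 detaches before the next key is tried. The rpc_cmd/jq pipeline from the trace, condensed into a standalone sketch:

  # verify the DH-HMAC-CHAP handshake succeeded, then clean up for the next keyid
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]                                # controller exists => auth succeeded
  scripts/rpc.py bdev_nvme_detach_controller nvme0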
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.068 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.326 nvme0n1 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: ]] 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.326 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.586 nvme0n1 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
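The get_main_ns_ip trace that repeats before every attach is transport-aware address selection: an associative array maps each transport to the name of the environment variable holding the right address, and variable indirection resolves it (10.0.0.1 for tcp in this run). Condensed from the nvmf/common.sh@769-783 steps above:

  # pick the connect address by transport; condensed sketch of get_main_ns_ip
  declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
  var=${ip_candidates[tcp]}   # variable *name* for this transport
  echo "${!var}"              # bash indirection; prints 10.0.0.1 here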
DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.586 13:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.586 nvme0n1 00:25:26.586 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.586 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.586 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.586 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.586 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.586 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
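keyid 4 is the one case with no controller key (ckey= is empty, so @51 evaluates [[ -z '' ]]), and the attach above correspondingly carries only --dhchap-key key4. The array assignment at auth.sh@58 is what makes the flag disappear:

  # ${ckeys[keyid]:+...} expands to nothing when ckeys[keyid] is unset or empty,
  # so for keyid=4 the ckey array is empty and --dhchap-ctrlr-key is omitted
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"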
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.845 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.845 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.845 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.845 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.845 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.845 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:26.845 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.845 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:26.845 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.845 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.845 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:26.845 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:26.845 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:26.845 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:26.845 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.845 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: ]] 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.413 13:19:33 
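At this point the sweep advances to ffdhe4096 and the whole pattern repeats per key. host/auth.sh@101-104 give the shape of the driver; reconstructed from the trace, with array contents limited to what this excerpt actually exercises (the full test may cover additional groups):

  for dhgroup in "${dhgroups[@]}"; do          # ffdhe3072, ffdhe4096, ffdhe6144 in this excerpt
      for keyid in "${!keys[@]}"; do           # keyids 0-4; only keyid 4 lacks a ckey
          nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # target expectations (auth.sh@103)
          connect_authenticate sha256 "$dhgroup" "$keyid"  # attach, verify, detach (auth.sh@104)
      done
  done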
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.413 13:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.673 nvme0n1 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: ]] 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:27.673 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.674 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.674 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.674 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.674 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.674 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.674 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.674 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.674 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.674 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.674 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.674 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.674 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.674 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.674 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:27.674 13:19:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.674 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.932 nvme0n1 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.932 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: ]] 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.933 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.192 nvme0n1 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: ]] 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.192 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.452 nvme0n1 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.452 13:19:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.452 13:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.709 nvme0n1 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.709 13:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: ]] 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.236 nvme0n1 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: ]] 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.236 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.494 nvme0n1 00:25:31.494 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.495 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.495 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.495 13:19:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.495 13:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.495 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: ]] 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.784 13:19:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.784 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.785 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.785 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.785 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.785 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.785 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.785 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.785 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.785 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.053 nvme0n1 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: ]] 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.053 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.054 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.054 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.054 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.054 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.054 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.054 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.054 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.054 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.054 
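(Each of these rounds reduces to the same short RPC sequence. Condensed, using the literal arguments from the sha256/ffdhe6144 rounds above; rpc_cmd is assumed here to shell out to SPDK's scripts/rpc.py, its usual implementation in the autotest harness, and the key names key3/ckey3 refer to secrets registered earlier in the run:

    # one connect_authenticate round, as exercised by the trace
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    # attach only succeeds if DH-HMAC-CHAP authentication succeeded:
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0    # tear down before the next key id

The get_controllers/jq check is exactly the [[ nvme0 == \n\v\m\e\0 ]] comparison visible at host/auth.sh@64 throughout the log.)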
13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.620 nvme0n1 00:25:32.620 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.620 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.620 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.621 13:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.621 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.621 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.621 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.621 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.621 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.621 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.621 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.621 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.621 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.621 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.621 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.621 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.621 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.621 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.879 nvme0n1 00:25:32.879 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.879 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.879 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.879 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.879 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.879 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.137 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.137 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.137 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.137 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.137 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.137 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.137 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.137 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:33.137 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.137 13:19:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.137 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:33.137 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: ]] 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.138 13:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.704 nvme0n1 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: ]] 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.704 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.640 nvme0n1 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: ]] 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.640 
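(The get_main_ns_ip helper that every round walks through, nvmf/common.sh@769-783 in the trace just above, simply maps the transport to the right address variable. Reconstructed approximately from the xtrace; the exact guard order is inferred, and TEST_TRANSPORT is tcp in this job, so it resolves to 10.0.0.1:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # the *name* of the variable, e.g. NVMF_INITIATOR_IP
        ip=${!ip}                              # indirect expansion to the address itself
        [[ -z $ip ]] && return 1               # trace shows [[ -z 10.0.0.1 ]] evaluating false
        echo "$ip"
    }

This matches the sequence the log repeats: the candidate assignments, the two -z guards at @775, ip=NVMF_INITIATOR_IP at @776, and echo 10.0.0.1 at @783.)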
13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.640 13:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.205 nvme0n1 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: ]] 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.205 13:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.770 nvme0n1 00:25:35.770 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.770 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.770 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.770 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.770 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.033 13:19:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.033 13:19:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.033 13:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.607 nvme0n1 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: ]] 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.607 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:36.893 nvme0n1 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: ]] 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.893 nvme0n1 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.893 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:37.152 
13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: ]] 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.152 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.153 nvme0n1 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: ]] 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.153 
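
Each connect_authenticate pass (host/auth.sh@55-65, repeated above for every key index) then drives the host side through a full authenticated attach/verify/detach using the RPCs visible in the trace. Collapsed into a standalone script, assuming scripts/rpc.py from an SPDK checkout (rpc_cmd in this log is the suite's wrapper around it) and secrets named key2/ckey2 already registered earlier in the run:

    #!/usr/bin/env bash
    # One authenticated connect/verify/disconnect cycle, as in connect_authenticate.
    rpc=./scripts/rpc.py    # assumed path; rpc_cmd wraps this in the suite

    $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
    [[ ${name} == "nvme0" ]] || exit 1   # DH-HMAC-CHAP produced a live controller
    $rpc bdev_nvme_detach_controller nvme0
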
13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.153 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.411 nvme0n1 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.411 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
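
Key index 4 (traced just above) is the one case with no controller key: ckey= expands empty at host/auth.sh@46 and the [[ -z '' ]] at @51 skips the second echo. The ckey=(...) assignment at @58 then turns that emptiness into "no extra flags" using bash's ${var:+word} expansion, so the attach for keyid 4 requests one-way authentication only. A self-contained illustration of the idiom (the key value is a placeholder):

    #!/usr/bin/env bash
    # ${ckeys[keyid]:+...} expands to the flag pair only when the element
    # is non-empty, so the optional argument list builds itself.
    ckeys=([1]="DHHC-1:placeholder" [4]="")   # index 4 deliberately empty
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=${keyid}: ${#ckey[@]} extra args -> ${ckey[*]}"
    done
    # keyid=1: 2 extra args -> --dhchap-ctrlr-key ckey1
    # keyid=4: 0 extra args ->
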
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.412 nvme0n1 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.412 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: ]] 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.675 13:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.676 nvme0n1 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.676 
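
The get_main_ns_ip block repeated before every attach (nvmf/common.sh@769-783) is a transport-to-address lookup: an associative array maps rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, the selected variable is dereferenced, and since this run uses tcp with NVMF_INITIATOR_IP=10.0.0.1, every cycle echoes 10.0.0.1. Reduced to its essentials (the rdma address below is a made-up stand-in; the trace never shows it):

    #!/usr/bin/env bash
    # Transport -> IP-variable lookup, as traced in get_main_ns_ip.
    TEST_TRANSPORT=tcp
    NVMF_INITIATOR_IP=10.0.0.1
    NVMF_FIRST_TARGET_IP=10.0.0.2   # illustrative; would be used for rdma

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
        [[ -n ${ip} ]] || return 1
        ip=${!ip}                              # indirect expansion -> 10.0.0.1
        [[ -n ${ip} ]] && echo "${ip}"
    }
    get_main_ns_ip   # prints 10.0.0.1
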
13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: ]] 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.676 13:19:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.676 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.935 nvme0n1 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:37.935 13:19:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: ]] 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.935 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.193 nvme0n1 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
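
After each attach, the suite proves authentication actually yielded a controller: bdev_nvme_get_controllers is piped through jq -r '.[].name' and the result compared at host/auth.sh@64 as [[ nvme0 == \n\v\m\e\0 ]]. The backslashes are xtrace's re-rendering of a quoted right-hand side: unquoted, the right operand of == inside [[ ]] is a glob pattern, so bash escapes each character in the trace to keep the comparison literal. A quick demonstration:

    #!/usr/bin/env bash
    # RHS of [[ == ]] is a pattern unless quoted; escaping makes it literal,
    # which is exactly how `set -x` reprints quoted operands.
    name=nvme0
    [[ ${name} == nvme* ]]      && echo "glob match"
    [[ ${name} == \n\v\m\e\0 ]] && echo "literal match (escaped)"
    [[ ${name} == "nvme0" ]]    && echo "literal match (quoted)"
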
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: ]] 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.194 13:19:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.194 nvme0n1 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.194 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:38.452 
13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
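
Every secret in this log follows the DHHC-1 container format for NVMe DH-HMAC-CHAP keys: DHHC-1:<t>:<base64 payload>:, where the payload is the raw secret followed by a 4-byte CRC-32, and <t> records how the secret was produced (00 untransformed, 01/02/03 for SHA-256/384/512-sized secrets of 32/48/64 bytes). That reading can be checked directly against one of the keys above; the <t> semantics here are summarized from the NVMe-oF spec, not from this log:

    #!/usr/bin/env bash
    # Inspect a DHHC-1 secret: payload = secret || CRC32 (4 bytes).
    key='DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==:'
    b64=${key#DHHC-1:*:}; b64=${b64%:}           # keep only the base64 field
    len=$(printf '%s' "${b64}" | base64 -d | wc -c)
    echo "payload ${len} bytes = $((len - 4))-byte secret + 4-byte CRC"  # 52 = 48 + 4
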
00:25:38.452 nvme0n1 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.452 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.738 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.738 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:38.738 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.738 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:38.738 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.738 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.738 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:38.738 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:38.738 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:38.739 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:38.739 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.739 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:38.739 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:38.739 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: ]] 00:25:38.739 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:38.739 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:38.739 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.739 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.739 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:38.739 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:38.739 13:19:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.739 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:38.739 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.739 13:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.739 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.740 nvme0n1 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.740 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.038 13:19:45 
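
The transition traced just above at host/auth.sh@101-103 (into ffdhe4096, key index 0) shows the shape of this whole section: an outer loop over DH groups and an inner loop over every key index, re-running the same set-key/connect/verify/detach cycle, with the digest already pinned to sha384 by an enclosing iteration outside this excerpt. The skeleton, with placeholder bodies standing in for the helpers sketched earlier:

    #!/usr/bin/env bash
    # Loop structure behind host/auth.sh@101-104 (placeholder bodies).
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)    # groups exercised in this span
    keys=(k0 k1 k2 k3 k4)                       # stand-ins for the DHHC-1 secrets

    nvmet_auth_set_key()   { echo "target: digest=$1 dhgroup=$2 keyid=$3"; }
    connect_authenticate() { echo "host:   digest=$1 dhgroup=$2 keyid=$3"; }

    for dhgroup in "${dhgroups[@]}"; do         # auth.sh@101
        for keyid in "${!keys[@]}"; do          # auth.sh@102: indices 0..4
            nvmet_auth_set_key   sha384 "${dhgroup}" "${keyid}"   # auth.sh@103
            connect_authenticate sha384 "${dhgroup}" "${keyid}"   # auth.sh@104
        done
    done
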
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: ]] 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:39.038 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.039 13:19:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.039 nvme0n1 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: ]] 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
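
One last recurring pattern: every rpc_cmd in the trace is bracketed by common/autotest_common.sh@563 xtrace_disable (whose tail is the @10 set +x seen throughout) and the @591 [[ 0 == 0 ]] on the way back out, a status check whose operands have already been expanded. This keeps the -x log at one line per RPC instead of tracing the wrapper's internals. Roughly, assuming a simplified helper (the real one also tracks nesting depth):

    #!/usr/bin/env bash
    # Save/disable/restore xtrace around a noisy section (simplified sketch).
    xtrace_disable() {
        XTRACE_STATE=$(set +o | grep xtrace)   # "set -o xtrace" or "set +o xtrace"
        set +x
    }
    xtrace_restore() { eval "${XTRACE_STATE}"; }

    set -x
    echo traced
    xtrace_disable
    echo quiet        # not traced
    xtrace_restore
    echo traced again
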
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.039 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.298 nvme0n1 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.298 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: ]] 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:39.558 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.559 13:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.559 nvme0n1 00:25:39.559 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.559 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.559 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.559 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.559 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.559 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.818 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.818 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.818 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.818 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.818 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.818 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.818 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:39.818 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.818 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.818 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:39.818 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:39.818 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:39.818 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:39.818 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.818 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.819 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.078 nvme0n1 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
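# [editor's note -- annotation, not part of the captured log] The pattern
# repeating above is connect_authenticate() (host/auth.sh@104): the initiator
# is first restricted to exactly one digest/dhgroup pair via
# bdev_nvme_set_options, then attached with the DH-CHAP key under test.
# A minimal standalone sketch of the same host-side sequence; the rpc.py
# invocation and the pre-registered key names key2/ckey2 are assumptions
# (key registration happens earlier in the test, outside this excerpt):
#
#   scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
#   scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
#       -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
#       --dhchap-key key2 --dhchap-ctrlr-key ckey2
#   scripts/rpc.py bdev_nvme_detach_controller nvme0   # torn down before the next combination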
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:40.078 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: ]] 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.079 13:19:46 
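# [editor's note] nvmet_auth_set_key (host/auth.sh@42-51) programs the kernel
# soft-target with the matching credentials before each handshake; xtrace only
# shows the echo commands because redirections are not traced. A plausible
# reconstruction of where those echoes land -- the configfs attribute paths
# are an assumption based on the Linux nvmet DH-HMAC-CHAP interface and are
# not visible in this log:
#
#   host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
#   echo 'hmac(sha384)' > "$host/dhchap_hash"      # auth.sh@48
#   echo ffdhe6144      > "$host/dhchap_dhgroup"   # auth.sh@49
#   echo "$key"         > "$host/dhchap_key"       # auth.sh@50
#   [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # auth.sh@51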
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.079 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.338 nvme0n1 00:25:40.338 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.338 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.338 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.338 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.338 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.338 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.338 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.338 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.338 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.338 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: ]] 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.597 13:19:46 
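# [editor's note] host/auth.sh@58 builds the controller-key argument with
#   ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
# The ${var:+...} expansion yields the option pair only when a controller key
# exists for this keyid, so "${ckey[@]}" can be appended unconditionally to
# the attach call; for keyid=4 (ckey= above) the array expands to nothing and
# authentication stays one-way. Self-contained illustration of the idiom:
#
#   ckeys=([1]='DHHC-1:02:...' [4]='')   # indexed array, as in auth.sh; key truncated
#   keyid=1; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
#   echo "${ckey[@]}"   # -> --dhchap-ctrlr-key ckey1
#   keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
#   echo "${ckey[@]}"   # -> (empty: controller authentication skipped)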
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.597 13:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.855 nvme0n1 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: ]] 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.855 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.856 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.421 nvme0n1 00:25:41.421 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.421 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.421 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.421 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.421 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.421 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.421 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.421 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.421 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.421 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.421 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.421 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.421 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:25:41.421 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.421 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.421 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:41.421 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: ]] 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.422 13:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.679 nvme0n1 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.680 13:19:48 
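# [editor's note] the get_main_ns_ip helper (nvmf/common.sh@769-783) seen in
# every block above maps the transport to the name of the environment variable
# holding the initiator address and then dereferences it; with tcp that is
# NVMF_INITIATOR_IP, hence the echoed 10.0.0.1. Condensed sketch of the logic
# (the indirect expansion step is inferred -- xtrace shows only its result):
#
#   declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
#   var=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
#   ip=${!var}                              # -> 10.0.0.1
#   [[ -n $ip ]] && echo "$ip"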
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.680 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.937 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.937 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.937 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.937 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.937 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.937 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.937 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.937 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.937 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.937 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.937 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.937 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.937 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.937 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.937 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.196 nvme0n1 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: ]] 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.196 13:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.133 nvme0n1 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: ]] 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.133 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.134 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.134 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.134 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.134 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.134 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.134 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:43.134 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.134 13:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.700 nvme0n1 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.700 13:19:50 
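# [editor's note] each attach is verified before moving on: host/auth.sh@64
# lists controllers through jq and string-compares the name, then @65 detaches.
# The backslash-escaped right-hand side (\n\v\m\e\0) forces a literal rather
# than glob match inside [[ ]]. Equivalent quoted form (rpc.py path assumed):
#
#   name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
#   [[ $name == "nvme0" ]]                    # quoted RHS == literal match
#   scripts/rpc.py bdev_nvme_detach_controller nvme0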
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: ]] 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.700 13:19:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.700 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.637 nvme0n1 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: ]] 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.637 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.638 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.638 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.638 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.638 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.638 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.638 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.638 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.638 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:44.638 13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.638 
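# [editor's note] every secret in this run uses the NVMe DH-HMAC-CHAP
# transport format  DHHC-1:<t>:<base64>:  -- the field reading below is
# inferred from the spec'd key format, not stated anywhere in this log:
# <t> selects the secret transformation (00 = none; 01/02/03 = HMAC with
# SHA-256/384/512) and the base64 payload carries the secret followed by a
# CRC-32 check tail. Quick field split for illustration:
#
#   k='DHHC-1:02:MzBjYjg2...'          # truncated for the example
#   IFS=: read -r fmt xform b64 _ <<<"$k"
#   echo "format=$fmt transform=$xform payload=${#b64} base64 chars"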
13:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.206 nvme0n1 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:45.206 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:45.207 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:45.207 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:45.207 13:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:46.142 nvme0n1
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
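
This is the seam between two digest passes: every keyid has now been exercised for sha384, and the for-lines that follow (@100 through @103) advance the outer loops to sha512, restarting with the smallest group, ffdhe2048, and keyid 0. Reconstructed from the trace, the harness is a plain nested walk over digest x dhgroup x keyid; a minimal runnable sketch of that structure, with stubbed helpers and placeholder keys instead of the real DHHC-1 secrets:

    #!/usr/bin/env bash
    # skeleton of the matrix as it appears in the xtrace output; the array
    # contents and helper bodies are placeholders, not the real test's values
    digests=(sha384 sha512)                      # the trace shows at least these
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)
    keys=(k0 k1 k2 k3 k4)                        # DHHC-1 host secrets in reality
    ckeys=(c0 c1 c2 c3 "")                       # empty entry 4 => unidirectional

    nvmet_auth_set_key()   { echo "target: $*"; }   # stub: programs the target
    connect_authenticate() { echo "host: $*"; }     # stub: attach/verify/detach

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do       # "${!keys[@]}" = indices 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

The @58 line that repeats throughout, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), is what makes the controller key optional: ${var:+word} expands to word only when var is set and non-empty, so the ckey array either supplies the two extra arguments to rpc_cmd or expands to nothing, which is why the keyid=4 attach above had no --dhchap-ctrlr-key.
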
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z:
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=:
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z:
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: ]]
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=:
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:46.143 13:19:52
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.143 nvme0n1 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: ]] 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:46.143 13:19:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.143 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:46.144 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.144 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.446 nvme0n1 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: ]] 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.446 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.447 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.447 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:46.447 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.447 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.447 nvme0n1 00:25:46.447 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.447 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.447 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.447 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.447 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.447 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.447 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.447 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.447 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.447 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: ]] 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.706 13:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.706 nvme0n1 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:46.706 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.707 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.966 nvme0n1 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: ]] 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:46.966 nvme0n1 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.966 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: ]] 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.225 nvme0n1 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.225 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:47.226 
13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: ]] 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.226 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.484 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.484 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.484 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.484 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.484 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.484 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.484 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.484 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.484 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.484 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.484 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.485 nvme0n1 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: ]] 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.485 
13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.485 13:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.743 nvme0n1 00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=:
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=:
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:47.743 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:47.744 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:47.744 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.003 nvme0n1
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z:
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=:
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z:
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: ]]
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=:
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.003 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.263 nvme0n1
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==:
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==:
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==:
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: ]]
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==:
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.263 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.523 nvme0n1
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9:
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy:
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9:
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: ]]
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy:
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.523 13:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.780 nvme0n1
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==:
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE:
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==:
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: ]]
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE:
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:48.780 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.781 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.039 nvme0n1
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=:
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=:
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:49.039 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:49.040 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:49.040 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.298 nvme0n1 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: ]] 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:49.298 13:19:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.298 13:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 nvme0n1 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.913 13:19:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: ]] 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.913 13:19:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.913 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 nvme0n1 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:50.172 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: ]] 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.173 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.741 nvme0n1 00:25:50.741 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.741 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.741 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.741 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.741 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.741 13:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: ]] 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.741 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.000 nvme0n1 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.000 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.567 nvme0n1 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVlYTY0NWQxYWQzNzYyNWI2ODYyYzBiNDg1MTA1MzLVVA/z: 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: ]] 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmY1ZWM2NWJmOGM1NjU1YWFmOTA3NjI0YzM4MTU2Y2E4ZmQxMDlhYTc2MGU5MzFmMGI2YWI1YzNkOTI1NjhhYpatzs8=: 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.567 13:19:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.567 13:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.136 nvme0n1 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: ]] 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.136 13:19:58 
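
The attach traced at host/auth.sh@61 names its secrets (key1, ckey1) rather than passing them inline: SPDK resolves --dhchap-key/--dhchap-ctrlr-key through its keyring, so the keys have to be registered before the connect. That registration step is not shown in this excerpt; a hedged sketch of driving the same flow by hand with scripts/rpc.py, assuming the key material sits in files like the /tmp/spdk.key-* files this run deletes during cleanup (the paths below are illustrative):

  scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-sha512.key1   # host secret
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha512.ckey1  # controller secret
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
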
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.136 13:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.073 nvme0n1 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: ]] 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.073 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.074 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.074 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.074 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.074 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.074 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.074 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.074 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.074 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:53.074 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.074 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.642 nvme0n1 00:25:53.642 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.642 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.642 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.642 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.642 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.642 13:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzBjYjg2ODU3NmQwNzVmM2UzZDQ5NTRlMzZjZWE1MjY5Y2UyN2YxNGY4ZjUzNTc0NKcVlA==: 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: ]] 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmQ1MzQwNGEzYzY1MDJjZWU3NTM0ODE4M2NmNTVkNDEpIWcE: 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.642 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.209 nvme0n1 00:25:54.209 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.209 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.209 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.209 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.209 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.209 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.209 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.209 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.209 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.209 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.468 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.468 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBhZDIxYjc0OGFkNmM1YTM1NjgxYTI4ZDlhMDViMWQ2OTdkNTM1ZTE0MWEzNTRlYzhlN2Y5N2NhMTViZmRkNbih3l0=: 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.469 13:20:00 
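
Keyid 4 in the block above carries an empty ckey, so the [[ -z '' ]] guard drops the controller-key argument and the handshake is unidirectional: the target authenticates the host, but the host does not verify the target. Both shapes appear in this run; side by side, on the same endpoint:

  # bidirectional: host proves itself with key3 and also verifies the target via ckey3
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  # unidirectional: keyid 4 ships no controller secret, so only --dhchap-key is passed
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key4
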
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.469 13:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.038 nvme0n1 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: ]] 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.038 2024/12/06 13:20:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:25:55.038 request: 00:25:55.038 { 00:25:55.038 "method": "bdev_nvme_attach_controller", 00:25:55.038 "params": { 00:25:55.038 "name": "nvme0", 00:25:55.038 "trtype": "tcp", 00:25:55.038 "traddr": "10.0.0.1", 00:25:55.038 "adrfam": "ipv4", 00:25:55.038 "trsvcid": "4420", 00:25:55.038 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:55.038 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:55.038 "prchk_reftag": false, 00:25:55.038 "prchk_guard": false, 00:25:55.038 "hdgst": false, 00:25:55.038 "ddgst": false, 00:25:55.038 "allow_unrecognized_csi": false 00:25:55.038 } 00:25:55.038 } 00:25:55.038 Got JSON-RPC error response 00:25:55.038 GoRPCClient: error on JSON-RPC call 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
get_main_ns_ip 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.038 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.039 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.039 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.039 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.039 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.039 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.039 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.039 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:55.039 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:55.039 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:55.039 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:55.039 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.039 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:55.039 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.039 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:55.039 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.039 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.298 2024/12/06 13:20:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:25:55.298 request: 00:25:55.298 { 00:25:55.298 "method": "bdev_nvme_attach_controller", 00:25:55.298 "params": { 00:25:55.298 "name": "nvme0", 00:25:55.298 "trtype": "tcp", 00:25:55.298 "traddr": "10.0.0.1", 00:25:55.298 "adrfam": "ipv4", 00:25:55.298 "trsvcid": "4420", 00:25:55.298 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:55.298 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:55.298 "prchk_reftag": false, 00:25:55.298 "prchk_guard": false, 
00:25:55.298 "hdgst": false, 00:25:55.298 "ddgst": false, 00:25:55.298 "dhchap_key": "key2", 00:25:55.298 "allow_unrecognized_csi": false 00:25:55.298 } 00:25:55.298 } 00:25:55.298 Got JSON-RPC error response 00:25:55.298 GoRPCClient: error on JSON-RPC call 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t 
rpc_cmd 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.298 2024/12/06 13:20:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:25:55.298 request: 00:25:55.298 { 00:25:55.298 "method": "bdev_nvme_attach_controller", 00:25:55.298 "params": { 00:25:55.298 "name": "nvme0", 00:25:55.298 "trtype": "tcp", 00:25:55.298 "traddr": "10.0.0.1", 00:25:55.298 "adrfam": "ipv4", 00:25:55.298 "trsvcid": "4420", 00:25:55.298 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:55.298 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:55.298 "prchk_reftag": false, 00:25:55.298 "prchk_guard": false, 00:25:55.298 "hdgst": false, 00:25:55.298 "ddgst": false, 00:25:55.298 "dhchap_key": "key1", 00:25:55.298 "dhchap_ctrlr_key": "ckey2", 00:25:55.298 "allow_unrecognized_csi": false 00:25:55.298 } 00:25:55.298 } 00:25:55.298 Got JSON-RPC error response 00:25:55.298 GoRPCClient: error on JSON-RPC call 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:55.298 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.299 nvme0n1 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: ]] 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.299 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.557 2024/12/06 13:20:01 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-5 Msg=Input/output error 00:25:55.557 request: 00:25:55.557 { 00:25:55.557 "method": "bdev_nvme_set_keys", 00:25:55.557 "params": { 00:25:55.557 "name": "nvme0", 00:25:55.557 "dhchap_key": "key1", 00:25:55.557 "dhchap_ctrlr_key": "ckey2" 00:25:55.557 } 00:25:55.557 } 00:25:55.557 Got JSON-RPC error response 00:25:55.557 GoRPCClient: error on JSON-RPC call 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:55.557 13:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:56.493 13:20:02 
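
After the rejected re-key, host/auth.sh@137-138 polls until the controller created earlier with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 ages out of bdev_nvme_get_controllers; the (( 1 != 0 )) / sleep 1s / (( 0 != 0 )) sequence traced above is that loop running to completion. A sketch of the same wait, reusing the run's own rpc_cmd helper:

  # spin until the short ctrlr-loss timeout reaps the controller
  while (( $(rpc_cmd bdev_nvme_get_controllers | jq length) != 0 )); do
      sleep 1
  done
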
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQxMWFiOWQ4ZGQ4NjVkODFkZDlmZDNmNjMwNDBkMmQxNGY1MWU2NjBiMzYwNzQ0FkuwFw==: 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: ]] 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgxYzFkN2IzOGVhYjgxNDJjODExYTU5ZjEzZWM5OTBjYTJlMTFmOWU4NjY0ZmQyaF3Zdw==: 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.493 13:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.751 nvme0n1 00:25:56.751 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.751 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:56.751 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.751 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:25:56.751 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.751 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:56.751 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:56.751 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:56.751 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.751 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.751 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ4OTZlMWVmZDQ1MjYxNjdjODAxMTY2Mjg4Y2EyN2FPAdq9: 00:25:56.751 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: ]] 00:25:56.751 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODY2NTU2MmI3MTg1MTU3MWUwMzE0NjVjZjZkYTcxODHnjlKy: 00:25:56.751 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:56.751 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:56.751 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:56.751 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:56.752 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:56.752 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:56.752 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:56.752 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:56.752 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.752 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.752 2024/12/06 13:20:03 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:25:56.752 request: 00:25:56.752 { 00:25:56.752 "method": "bdev_nvme_set_keys", 00:25:56.752 "params": { 00:25:56.752 "name": "nvme0", 00:25:56.752 "dhchap_key": "key2", 00:25:56.752 "dhchap_ctrlr_key": "ckey1" 00:25:56.752 } 00:25:56.752 } 00:25:56.752 Got JSON-RPC error response 00:25:56.752 GoRPCClient: error on JSON-RPC call 00:25:56.752 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:56.752 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:56.752 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:56.752 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:56.752 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:56.752 13:20:03 
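
The Code=-13 Permission denied above is the expected outcome: bdev_nvme_set_keys with a mismatched pair (key2 against ckey1) must be refused, and the NOT wrapper turns that refusal into a passing assertion by inverting the exit status (the trailing es=1 / (( !es == 0 )) bookkeeping). A simplified sketch of the idiom, leaving out the valid_exec_arg checks the real helper in autotest_common.sh performs:

  NOT() {  # succeed only when the wrapped command fails
      if "$@"; then
          return 1
      fi
      return 0
  }
  NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
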
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.752 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.752 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:56.752 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.752 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.752 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:56.752 13:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:57.684 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.684 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:57.684 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.684 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.684 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.940 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:57.940 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:57.941 rmmod nvme_tcp 00:25:57.941 rmmod nvme_fabrics 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 92475 ']' 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 92475 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 92475 ']' 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 92475 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92475 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:57.941 killing process 
with pid 92475 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92475' 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 92475 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 92475 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:57.941 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:58.197 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:58.454 13:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:59.019 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:59.019 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:59.019 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:59.277 13:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.wVT /tmp/spdk.key-null.3DX /tmp/spdk.key-sha256.dh7 /tmp/spdk.key-sha384.ejc /tmp/spdk.key-sha512.b1g /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:25:59.277 13:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:59.534 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:59.534 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:59.534 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:59.534 00:25:59.534 real 0m38.778s 00:25:59.534 user 0m34.859s 00:25:59.534 sys 0m3.747s 00:25:59.534 13:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:59.534 13:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.534 ************************************ 00:25:59.534 END TEST nvmf_auth_host 00:25:59.534 ************************************ 00:25:59.534 13:20:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:59.534 13:20:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:59.534 13:20:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:59.534 13:20:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:59.534 13:20:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.792 ************************************ 00:25:59.792 START TEST nvmf_digest 00:25:59.792 
************************************ 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:59.792 * Looking for test storage... 00:25:59.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:59.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.792 --rc genhtml_branch_coverage=1 00:25:59.792 --rc genhtml_function_coverage=1 00:25:59.792 --rc genhtml_legend=1 00:25:59.792 --rc geninfo_all_blocks=1 00:25:59.792 --rc geninfo_unexecuted_blocks=1 00:25:59.792 00:25:59.792 ' 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:59.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.792 --rc genhtml_branch_coverage=1 00:25:59.792 --rc genhtml_function_coverage=1 00:25:59.792 --rc genhtml_legend=1 00:25:59.792 --rc geninfo_all_blocks=1 00:25:59.792 --rc geninfo_unexecuted_blocks=1 00:25:59.792 00:25:59.792 ' 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:59.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.792 --rc genhtml_branch_coverage=1 00:25:59.792 --rc genhtml_function_coverage=1 00:25:59.792 --rc genhtml_legend=1 00:25:59.792 --rc geninfo_all_blocks=1 00:25:59.792 --rc geninfo_unexecuted_blocks=1 00:25:59.792 00:25:59.792 ' 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:59.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.792 --rc genhtml_branch_coverage=1 00:25:59.792 --rc genhtml_function_coverage=1 00:25:59.792 --rc genhtml_legend=1 00:25:59.792 --rc geninfo_all_blocks=1 00:25:59.792 --rc geninfo_unexecuted_blocks=1 00:25:59.792 00:25:59.792 ' 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.792 13:20:06 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.792 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:59.793 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:59.793 Cannot find device "nvmf_init_br" 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:25:59.793 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:59.793 Cannot find device "nvmf_init_br2" 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:00.052 Cannot find device "nvmf_tgt_br" 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:26:00.052 Cannot find device "nvmf_tgt_br2" 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:00.052 Cannot find device "nvmf_init_br" 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:00.052 Cannot find device "nvmf_init_br2" 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:00.052 Cannot find device "nvmf_tgt_br" 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:00.052 Cannot find device "nvmf_tgt_br2" 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:00.052 Cannot find device "nvmf_br" 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:00.052 Cannot find device "nvmf_init_if" 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:00.052 Cannot find device "nvmf_init_if2" 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:00.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:00.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:00.052 13:20:06 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:00.052 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:00.312 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:00.312 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:26:00.312 00:26:00.312 --- 10.0.0.3 ping statistics --- 00:26:00.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.312 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:00.312 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:00.312 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:26:00.312 00:26:00.312 --- 10.0.0.4 ping statistics --- 00:26:00.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.312 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:00.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:00.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:26:00.312 00:26:00.312 --- 10.0.0.1 ping statistics --- 00:26:00.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.312 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:00.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:00.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:26:00.312 00:26:00.312 --- 10.0.0.2 ping statistics --- 00:26:00.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.312 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:00.312 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:00.313 ************************************ 00:26:00.313 START TEST nvmf_digest_clean 00:26:00.313 ************************************ 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:00.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=94154 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 94154 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94154 ']' 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:00.313 13:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:00.313 [2024-12-06 13:20:06.776009] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:26:00.313 [2024-12-06 13:20:06.776110] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.572 [2024-12-06 13:20:06.929663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.572 [2024-12-06 13:20:06.966844] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.572 [2024-12-06 13:20:06.966913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.572 [2024-12-06 13:20:06.966928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.572 [2024-12-06 13:20:06.966938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.572 [2024-12-06 13:20:06.966965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
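For reference, the virtual topology that nvmftestinit assembled above can be reproduced by hand. This is a condensed sketch built only from commands visible in the trace (interface names, addresses, and the namespace name are the ones the test uses; the second if2/br2 veth pair is created the same way and is omitted here):

    # One namespace for the target, veth pairs, one bridge tying them together.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                   # bridge the two sides
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                        # initiator -> target sanity check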
00:26:00.572 [2024-12-06 13:20:06.967348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.572 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:00.572 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:00.572 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:00.572 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:00.572 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:00.572 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.572 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:00.572 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:00.572 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:00.572 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.572 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:00.831 null0 00:26:00.831 [2024-12-06 13:20:07.159398] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.831 [2024-12-06 13:20:07.183542] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:00.831 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.831 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:00.831 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:00.831 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:00.831 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:00.831 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:00.831 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:00.831 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:00.831 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94185 00:26:00.831 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:00.831 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94185 /var/tmp/bperf.sock 00:26:00.831 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94185 ']' 00:26:00.831 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:00.831 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:26:00.831 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:00.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:00.831 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:00.831 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:00.831 [2024-12-06 13:20:07.249262] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:26:00.831 [2024-12-06 13:20:07.249564] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94185 ] 00:26:01.090 [2024-12-06 13:20:07.405321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.090 [2024-12-06 13:20:07.450133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.090 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:01.090 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:01.090 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:01.090 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:01.090 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:01.348 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:01.348 13:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:01.916 nvme0n1 00:26:01.916 13:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:01.916 13:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:01.916 Running I/O for 2 seconds... 
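The run that just started follows the bperf pattern traced above: bdevperf is launched paused on its own RPC socket (--wait-for-rpc), framework init is completed over RPC, the TCP controller is attached with data digest enabled (--ddgst), and bdevperf.py drives the 2-second workload. A condensed sketch, with flags exactly as they appear in the log and paths abbreviated relative to the spdk repo:

    # Start bdevperf on core mask 0x2, randread, 4 KiB I/O, queue depth 128, paused.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # Attach the NVMe/TCP controller with data digest (crc32c) enabled.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Kick off the timed run.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests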
00:26:04.278 17408.00 IOPS, 68.00 MiB/s [2024-12-06T13:20:10.806Z] 17585.00 IOPS, 68.69 MiB/s 00:26:04.278 Latency(us) 00:26:04.278 [2024-12-06T13:20:10.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.278 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:04.278 nvme0n1 : 2.00 17609.02 68.79 0.00 0.00 7260.74 4170.47 17396.83 00:26:04.278 [2024-12-06T13:20:10.806Z] =================================================================================================================== 00:26:04.278 [2024-12-06T13:20:10.806Z] Total : 17609.02 68.79 0.00 0.00 7260.74 4170.47 17396.83 00:26:04.278 { 00:26:04.278 "results": [ 00:26:04.278 { 00:26:04.278 "job": "nvme0n1", 00:26:04.278 "core_mask": "0x2", 00:26:04.278 "workload": "randread", 00:26:04.278 "status": "finished", 00:26:04.278 "queue_depth": 128, 00:26:04.278 "io_size": 4096, 00:26:04.278 "runtime": 2.004541, 00:26:04.278 "iops": 17609.01872298945, 00:26:04.278 "mibps": 68.78522938667754, 00:26:04.278 "io_failed": 0, 00:26:04.278 "io_timeout": 0, 00:26:04.278 "avg_latency_us": 7260.736457898722, 00:26:04.278 "min_latency_us": 4170.472727272727, 00:26:04.278 "max_latency_us": 17396.82909090909 00:26:04.278 } 00:26:04.278 ], 00:26:04.278 "core_count": 1 00:26:04.278 } 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:04.278 | select(.opcode=="crc32c") 00:26:04.278 | "\(.module_name) \(.executed)"' 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94185 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94185 ']' 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94185 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94185 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:26:04.278 killing process with pid 94185 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94185' 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94185 00:26:04.278 Received shutdown signal, test time was about 2.000000 seconds 00:26:04.278 00:26:04.278 Latency(us) 00:26:04.278 [2024-12-06T13:20:10.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.278 [2024-12-06T13:20:10.806Z] =================================================================================================================== 00:26:04.278 [2024-12-06T13:20:10.806Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:04.278 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94185 00:26:04.577 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:04.577 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:04.577 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:04.577 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:04.577 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:04.577 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:04.577 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:04.577 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94262 00:26:04.577 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:04.577 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94262 /var/tmp/bperf.sock 00:26:04.577 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94262 ']' 00:26:04.577 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:04.577 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:04.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:04.577 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:04.577 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:04.577 13:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:04.577 [2024-12-06 13:20:10.931051] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:26:04.577 [2024-12-06 13:20:10.931633] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94262 ] 00:26:04.577 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:04.577 Zero copy mechanism will not be used. 00:26:04.577 [2024-12-06 13:20:11.073399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.835 [2024-12-06 13:20:11.106923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.835 13:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:04.835 13:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:04.835 13:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:04.835 13:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:04.835 13:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:05.093 13:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:05.093 13:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:05.659 nvme0n1 00:26:05.659 13:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:05.659 13:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:05.659 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:05.659 Zero copy mechanism will not be used. 00:26:05.659 Running I/O for 2 seconds... 
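After each run the test decides whether digests were really computed: it pulls accel statistics over the same bperf socket, extracts the crc32c module name and execution count with jq, and requires a non-zero count from the software module (scan_dsa is false throughout this clean variant, so no DSA offload is expected). A sketch of that check, using the exact RPC and jq filter from the trace:

    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))            # digests were actually computed...
    [[ $acc_module == software ]]     # ...by the software crc32c module, as expected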
00:26:07.553 6898.00 IOPS, 862.25 MiB/s [2024-12-06T13:20:14.081Z] 6873.50 IOPS, 859.19 MiB/s 00:26:07.553 Latency(us) 00:26:07.553 [2024-12-06T13:20:14.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.553 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:07.553 nvme0n1 : 2.00 6870.74 858.84 0.00 0.00 2324.53 673.98 6374.87 00:26:07.553 [2024-12-06T13:20:14.081Z] =================================================================================================================== 00:26:07.553 [2024-12-06T13:20:14.081Z] Total : 6870.74 858.84 0.00 0.00 2324.53 673.98 6374.87 00:26:07.553 { 00:26:07.553 "results": [ 00:26:07.553 { 00:26:07.553 "job": "nvme0n1", 00:26:07.553 "core_mask": "0x2", 00:26:07.553 "workload": "randread", 00:26:07.553 "status": "finished", 00:26:07.553 "queue_depth": 16, 00:26:07.553 "io_size": 131072, 00:26:07.553 "runtime": 2.00357, 00:26:07.553 "iops": 6870.735736709973, 00:26:07.553 "mibps": 858.8419670887466, 00:26:07.553 "io_failed": 0, 00:26:07.553 "io_timeout": 0, 00:26:07.553 "avg_latency_us": 2324.5299063569005, 00:26:07.553 "min_latency_us": 673.9781818181818, 00:26:07.553 "max_latency_us": 6374.865454545455 00:26:07.553 } 00:26:07.553 ], 00:26:07.553 "core_count": 1 00:26:07.553 } 00:26:07.553 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:07.553 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:07.812 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:07.813 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:07.813 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:07.813 | select(.opcode=="crc32c") 00:26:07.813 | "\(.module_name) \(.executed)"' 00:26:08.071 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:08.071 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:08.071 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:08.071 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:08.071 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94262 00:26:08.071 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94262 ']' 00:26:08.071 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94262 00:26:08.071 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94262 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:08.072 
killing process with pid 94262 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94262' 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94262 00:26:08.072 Received shutdown signal, test time was about 2.000000 seconds 00:26:08.072 00:26:08.072 Latency(us) 00:26:08.072 [2024-12-06T13:20:14.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.072 [2024-12-06T13:20:14.600Z] =================================================================================================================== 00:26:08.072 [2024-12-06T13:20:14.600Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94262 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94339 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94339 /var/tmp/bperf.sock 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94339 ']' 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:08.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:08.072 13:20:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:08.330 [2024-12-06 13:20:14.657032] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:26:08.330 [2024-12-06 13:20:14.657168] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94339 ] 00:26:08.330 [2024-12-06 13:20:14.808587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.589 [2024-12-06 13:20:14.858487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.155 13:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.155 13:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:09.155 13:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:09.155 13:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:09.155 13:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:09.721 13:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:09.721 13:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:09.989 nvme0n1 00:26:09.989 13:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:09.990 13:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:09.990 Running I/O for 2 seconds... 
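The IOPS and MiB/s columns in the result tables printed after each run are two views of the same figure. For the 4 KiB randread run above, 17609.02 IOPS x 4096 B is about 72.13 MB/s, i.e. 68.79 MiB/s, which is exactly what the table reports. A one-line check:

    # For 4096-byte I/O: MiB/s = IOPS * 4096 / 1048576 = IOPS / 256
    awk 'BEGIN { printf "%.2f MiB/s\n", 17609.02 / 256 }'    # -> 68.79 MiB/s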
00:26:12.295 18199.00 IOPS, 71.09 MiB/s [2024-12-06T13:20:18.823Z] 17015.00 IOPS, 66.46 MiB/s 00:26:12.295 Latency(us) 00:26:12.295 [2024-12-06T13:20:18.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.295 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:12.295 nvme0n1 : 2.01 17048.76 66.60 0.00 0.00 7499.29 2681.02 43134.60 00:26:12.295 [2024-12-06T13:20:18.823Z] =================================================================================================================== 00:26:12.295 [2024-12-06T13:20:18.823Z] Total : 17048.76 66.60 0.00 0.00 7499.29 2681.02 43134.60 00:26:12.295 { 00:26:12.295 "results": [ 00:26:12.295 { 00:26:12.295 "job": "nvme0n1", 00:26:12.295 "core_mask": "0x2", 00:26:12.295 "workload": "randwrite", 00:26:12.295 "status": "finished", 00:26:12.295 "queue_depth": 128, 00:26:12.295 "io_size": 4096, 00:26:12.295 "runtime": 2.01, 00:26:12.295 "iops": 17048.756218905473, 00:26:12.295 "mibps": 66.5967039800995, 00:26:12.295 "io_failed": 0, 00:26:12.295 "io_timeout": 0, 00:26:12.295 "avg_latency_us": 7499.290410348377, 00:26:12.295 "min_latency_us": 2681.018181818182, 00:26:12.295 "max_latency_us": 43134.60363636364 00:26:12.295 } 00:26:12.295 ], 00:26:12.295 "core_count": 1 00:26:12.295 } 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:12.295 | select(.opcode=="crc32c") 00:26:12.295 | "\(.module_name) \(.executed)"' 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94339 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94339 ']' 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94339 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94339 00:26:12.295 killing process with pid 94339 00:26:12.295 Received shutdown signal, test time was about 2.000000 seconds 00:26:12.295 00:26:12.295 Latency(us) 00:26:12.295 [2024-12-06T13:20:18.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:26:12.295 [2024-12-06T13:20:18.823Z] =================================================================================================================== 00:26:12.295 [2024-12-06T13:20:18.823Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94339' 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94339 00:26:12.295 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94339 00:26:12.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:12.554 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:12.554 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:12.554 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:12.554 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:12.554 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:12.554 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:12.554 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:12.554 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94430 00:26:12.554 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:12.554 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94430 /var/tmp/bperf.sock 00:26:12.554 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94430 ']' 00:26:12.554 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:12.554 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:12.554 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:12.554 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:12.554 13:20:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:12.554 [2024-12-06 13:20:18.979786] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:26:12.554 [2024-12-06 13:20:18.979904] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94430 ] 00:26:12.554 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:12.554 Zero copy mechanism will not be used. 00:26:12.811 [2024-12-06 13:20:19.124397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.812 [2024-12-06 13:20:19.157641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.812 13:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:12.812 13:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:12.812 13:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:12.812 13:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:12.812 13:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:13.070 13:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:13.070 13:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:13.634 nvme0n1 00:26:13.634 13:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:13.634 13:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:13.634 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:13.634 Zero copy mechanism will not be used. 00:26:13.634 Running I/O for 2 seconds... 
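[Note] Both clean-digest runs end with the same verification step, visible above and repeated after the results below: accel_get_stats is fetched over the bperf socket and filtered with jq for the crc32c opcode, and digest.sh then requires executed > 0 with module_name equal to "software" (scan_dsa is false in these runs). A rough Python equivalent of that jq filter, assuming only the JSON shape the filter itself implies:

  import json

  def crc32c_stats(stats_json: str):
      # Equivalent of: .operations[] | select(.opcode=="crc32c")
      #                | "\(.module_name) \(.executed)"
      for op in json.loads(stats_json)["operations"]:
          if op["opcode"] == "crc32c":
              return op["module_name"], op["executed"]
      return None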
00:26:15.936 6333.00 IOPS, 791.62 MiB/s [2024-12-06T13:20:22.464Z] 6335.00 IOPS, 791.88 MiB/s 00:26:15.936 Latency(us) 00:26:15.936 [2024-12-06T13:20:22.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.936 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:15.936 nvme0n1 : 2.00 6331.13 791.39 0.00 0.00 2521.37 1571.37 4230.05 00:26:15.936 [2024-12-06T13:20:22.464Z] =================================================================================================================== 00:26:15.936 [2024-12-06T13:20:22.464Z] Total : 6331.13 791.39 0.00 0.00 2521.37 1571.37 4230.05 00:26:15.936 { 00:26:15.936 "results": [ 00:26:15.936 { 00:26:15.936 "job": "nvme0n1", 00:26:15.936 "core_mask": "0x2", 00:26:15.936 "workload": "randwrite", 00:26:15.936 "status": "finished", 00:26:15.936 "queue_depth": 16, 00:26:15.936 "io_size": 131072, 00:26:15.936 "runtime": 2.004066, 00:26:15.936 "iops": 6331.128815118864, 00:26:15.936 "mibps": 791.391101889858, 00:26:15.936 "io_failed": 0, 00:26:15.936 "io_timeout": 0, 00:26:15.937 "avg_latency_us": 2521.3663922962282, 00:26:15.937 "min_latency_us": 1571.3745454545453, 00:26:15.937 "max_latency_us": 4230.050909090909 00:26:15.937 } 00:26:15.937 ], 00:26:15.937 "core_count": 1 00:26:15.937 } 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:15.937 | select(.opcode=="crc32c") 00:26:15.937 | "\(.module_name) \(.executed)"' 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94430 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94430 ']' 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94430 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94430 00:26:15.937 killing process with pid 94430 00:26:15.937 Received shutdown signal, test time was about 2.000000 seconds 00:26:15.937 00:26:15.937 Latency(us) 00:26:15.937 [2024-12-06T13:20:22.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:26:15.937 [2024-12-06T13:20:22.465Z] =================================================================================================================== 00:26:15.937 [2024-12-06T13:20:22.465Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94430' 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94430 00:26:15.937 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94430 00:26:16.195 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 94154 00:26:16.195 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94154 ']' 00:26:16.195 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94154 00:26:16.195 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:16.195 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:16.195 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94154 00:26:16.195 killing process with pid 94154 00:26:16.195 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:16.195 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:16.195 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94154' 00:26:16.195 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94154 00:26:16.195 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94154 00:26:16.453 00:26:16.453 real 0m16.036s 00:26:16.453 user 0m31.812s 00:26:16.453 sys 0m4.029s 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:16.453 ************************************ 00:26:16.453 END TEST nvmf_digest_clean 00:26:16.453 ************************************ 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:16.453 ************************************ 00:26:16.453 START TEST nvmf_digest_error 00:26:16.453 ************************************ 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:26:16.453 13:20:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=94530 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 94530 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94530 ']' 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:16.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:16.453 13:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.453 [2024-12-06 13:20:22.866303] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:26:16.453 [2024-12-06 13:20:22.866405] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.711 [2024-12-06 13:20:23.013563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.711 [2024-12-06 13:20:23.044669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:16.711 [2024-12-06 13:20:23.044726] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:16.711 [2024-12-06 13:20:23.044738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:16.711 [2024-12-06 13:20:23.044747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:16.711 [2024-12-06 13:20:23.044754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
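[Note] From here the suite switches to the error path: as the trace below shows, digest.sh routes the crc32c opcode to the accel "error" module on the freshly started target (accel_assign_opc -o crc32c -m error), keeps injection disabled while the --ddgst controller attaches, then arms it to corrupt 256 operations (accel_error_inject_error -o crc32c -t corrupt -i 256). A sketch of that RPC sequence under those assumptions — method names and arguments are copied from this trace, the wrapper functions are mine:

  import subprocess

  RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # path as it appears in this trace

  def rpc_cmd(*args: str) -> None:
      # Like the script's rpc_cmd helper: talks to the nvmf target's default
      # RPC socket; bperf_rpc would add -s /var/tmp/bperf.sock instead.
      subprocess.run([RPC, *args], check=True)

  def arm_crc32c_corruption(count: int = 256) -> None:
      # digest.sh@104: route crc32c through the accel 'error' module
      rpc_cmd("accel_assign_opc", "-o", "crc32c", "-m", "error")
      # digest.sh@63: injection stays disabled while nvme0 attaches with --ddgst
      rpc_cmd("accel_error_inject_error", "-o", "crc32c", "-t", "disable")
      # digest.sh@67: corrupt the next `count` crc32c operations
      rpc_cmd("accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", str(count))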
00:26:16.711 [2024-12-06 13:20:23.045077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.711 [2024-12-06 13:20:23.125466] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.711 null0 00:26:16.711 [2024-12-06 13:20:23.202974] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.711 [2024-12-06 13:20:23.227088] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94555 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:16.711 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94555 /var/tmp/bperf.sock 00:26:16.976 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94555 ']' 00:26:16.976 13:20:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:16.976 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:16.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:16.976 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:16.976 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:16.976 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.976 [2024-12-06 13:20:23.310874] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:26:16.976 [2024-12-06 13:20:23.311035] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94555 ] 00:26:16.976 [2024-12-06 13:20:23.467928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.242 [2024-12-06 13:20:23.517496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.242 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.242 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:17.242 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:17.242 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:17.501 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:17.501 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.501 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:17.501 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.501 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.501 13:20:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:18.068 nvme0n1 00:26:18.068 13:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:18.068 13:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.068 13:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:18.068 13:20:24 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.068 13:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:18.068 13:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:18.068 Running I/O for 2 seconds... 00:26:18.068 [2024-12-06 13:20:24.532497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.068 [2024-12-06 13:20:24.532578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.068 [2024-12-06 13:20:24.532594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.068 [2024-12-06 13:20:24.544482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.068 [2024-12-06 13:20:24.544536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.068 [2024-12-06 13:20:24.544551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.068 [2024-12-06 13:20:24.559370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.068 [2024-12-06 13:20:24.559413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.068 [2024-12-06 13:20:24.559427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.068 [2024-12-06 13:20:24.573583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.068 [2024-12-06 13:20:24.573626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.068 [2024-12-06 13:20:24.573640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.068 [2024-12-06 13:20:24.587737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.068 [2024-12-06 13:20:24.587780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.068 [2024-12-06 13:20:24.587795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.328 [2024-12-06 13:20:24.601890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.328 [2024-12-06 13:20:24.601937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.328 [2024-12-06 13:20:24.601968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.328 
[2024-12-06 13:20:24.614292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.328 [2024-12-06 13:20:24.614364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.328 [2024-12-06 13:20:24.614381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.328 [2024-12-06 13:20:24.629652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.328 [2024-12-06 13:20:24.629723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.328 [2024-12-06 13:20:24.629739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.328 [2024-12-06 13:20:24.644018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.328 [2024-12-06 13:20:24.644097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.328 [2024-12-06 13:20:24.644112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.328 [2024-12-06 13:20:24.660825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.328 [2024-12-06 13:20:24.660899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.328 [2024-12-06 13:20:24.660914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.328 [2024-12-06 13:20:24.675182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.328 [2024-12-06 13:20:24.675259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.328 [2024-12-06 13:20:24.675275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.328 [2024-12-06 13:20:24.689514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.328 [2024-12-06 13:20:24.689797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.328 [2024-12-06 13:20:24.689819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.328 [2024-12-06 13:20:24.704151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.328 [2024-12-06 13:20:24.704207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.328 [2024-12-06 13:20:24.704223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.328 [2024-12-06 13:20:24.719894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.328 [2024-12-06 13:20:24.720213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.328 [2024-12-06 13:20:24.720236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.328 [2024-12-06 13:20:24.737504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.328 [2024-12-06 13:20:24.737577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.328 [2024-12-06 13:20:24.737594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.328 [2024-12-06 13:20:24.754758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.328 [2024-12-06 13:20:24.754828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.328 [2024-12-06 13:20:24.754845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.328 [2024-12-06 13:20:24.773900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.328 [2024-12-06 13:20:24.773998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.328 [2024-12-06 13:20:24.774015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.328 [2024-12-06 13:20:24.789930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.328 [2024-12-06 13:20:24.790002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.328 [2024-12-06 13:20:24.790018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.328 [2024-12-06 13:20:24.802226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.328 [2024-12-06 13:20:24.802302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.328 [2024-12-06 13:20:24.802318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.328 [2024-12-06 13:20:24.817287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.328 [2024-12-06 13:20:24.817352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.328 [2024-12-06 13:20:24.817369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.328 [2024-12-06 13:20:24.831995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.328 [2024-12-06 13:20:24.832050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.328 [2024-12-06 13:20:24.832066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.328 [2024-12-06 13:20:24.847062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.328 [2024-12-06 13:20:24.847293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.328 [2024-12-06 13:20:24.847314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.588 [2024-12-06 13:20:24.861605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.588 [2024-12-06 13:20:24.861657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.588 [2024-12-06 13:20:24.861672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.588 [2024-12-06 13:20:24.878150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.589 [2024-12-06 13:20:24.878352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.589 [2024-12-06 13:20:24.878372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.589 [2024-12-06 13:20:24.893502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.589 [2024-12-06 13:20:24.893686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.589 [2024-12-06 13:20:24.893716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.589 [2024-12-06 13:20:24.911182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.589 [2024-12-06 13:20:24.911466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.589 [2024-12-06 13:20:24.911708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.589 [2024-12-06 13:20:24.926452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.589 [2024-12-06 13:20:24.926680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.589 [2024-12-06 13:20:24.926819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.589 [2024-12-06 13:20:24.943594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.589 [2024-12-06 13:20:24.943800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.589 [2024-12-06 13:20:24.943969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.589 [2024-12-06 13:20:24.959882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.589 [2024-12-06 13:20:24.960101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.589 [2024-12-06 13:20:24.960294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.589 [2024-12-06 13:20:24.975773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.589 [2024-12-06 13:20:24.976036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.589 [2024-12-06 13:20:24.976278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.589 [2024-12-06 13:20:24.990938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.589 [2024-12-06 13:20:24.991166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.589 [2024-12-06 13:20:24.991374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.589 [2024-12-06 13:20:25.003965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.589 [2024-12-06 13:20:25.004191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.589 [2024-12-06 13:20:25.004301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.589 [2024-12-06 13:20:25.019318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.589 [2024-12-06 13:20:25.019515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.589 [2024-12-06 13:20:25.019536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.589 [2024-12-06 13:20:25.034342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.589 [2024-12-06 13:20:25.034517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.589 
[2024-12-06 13:20:25.034536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.589 [2024-12-06 13:20:25.048814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.589 [2024-12-06 13:20:25.048865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.589 [2024-12-06 13:20:25.048881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.589 [2024-12-06 13:20:25.062958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.589 [2024-12-06 13:20:25.063001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.589 [2024-12-06 13:20:25.063016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.589 [2024-12-06 13:20:25.077415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.589 [2024-12-06 13:20:25.077588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.589 [2024-12-06 13:20:25.077609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.589 [2024-12-06 13:20:25.091936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.589 [2024-12-06 13:20:25.092131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.589 [2024-12-06 13:20:25.092277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.589 [2024-12-06 13:20:25.106651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.589 [2024-12-06 13:20:25.106834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.589 [2024-12-06 13:20:25.106998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.848 [2024-12-06 13:20:25.122029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.848 [2024-12-06 13:20:25.122219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.848 [2024-12-06 13:20:25.122414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.848 [2024-12-06 13:20:25.137236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.848 [2024-12-06 13:20:25.137422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19896 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.848 [2024-12-06 13:20:25.137561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.848 [2024-12-06 13:20:25.152484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.848 [2024-12-06 13:20:25.152668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.848 [2024-12-06 13:20:25.152847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.848 [2024-12-06 13:20:25.167412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.848 [2024-12-06 13:20:25.167600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.848 [2024-12-06 13:20:25.167750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.848 [2024-12-06 13:20:25.182287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.848 [2024-12-06 13:20:25.182473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.848 [2024-12-06 13:20:25.182659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.848 [2024-12-06 13:20:25.197054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.848 [2024-12-06 13:20:25.197233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.848 [2024-12-06 13:20:25.197252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.848 [2024-12-06 13:20:25.211938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.848 [2024-12-06 13:20:25.211999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.848 [2024-12-06 13:20:25.212014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.848 [2024-12-06 13:20:25.226688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.848 [2024-12-06 13:20:25.226733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.848 [2024-12-06 13:20:25.226749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.848 [2024-12-06 13:20:25.240881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070) 00:26:18.848 [2024-12-06 13:20:25.240939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:22941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.848 [2024-12-06 13:20:25.240971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.848 [2024-12-06 13:20:25.254101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa29070)
00:26:18.848 [2024-12-06 13:20:25.254143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.848 [2024-12-06 13:20:25.254158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... identical groups repeat from 13:20:25.268 through 13:20:26.505: each is a crc32c data digest error on tqpair 0xa29070 that fails a len:1 READ with COMMAND TRANSIENT TRANSPORT ERROR (00/22); only the timestamp, cid, and lba vary. The harness counts 135 such completions for this run (see the (( 135 > 0 )) check below) ...]
00:26:19.107 16944.00 IOPS, 66.19 MiB/s [2024-12-06T13:20:25.635Z]
00:26:20.143 17241.00 IOPS, 67.35 MiB/s
00:26:20.143 Latency(us)
[2024-12-06T13:20:26.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:20.143 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:20.143 nvme0n1 : 2.01 17256.99 67.41 0.00 0.00 7409.54 4200.26 22758.87
[2024-12-06T13:20:26.671Z] ===================================================================================================================
[2024-12-06T13:20:26.671Z] Total : 17256.99 67.41 0.00 0.00 7409.54 4200.26 22758.87
00:26:20.143 {
00:26:20.143   "results": [
00:26:20.143     {
00:26:20.143       "job": "nvme0n1",
00:26:20.143       "core_mask": "0x2",
00:26:20.143       "workload": "randread",
00:26:20.143       "status": "finished",
00:26:20.143       "queue_depth": 128,
00:26:20.143       "io_size": 4096,
00:26:20.143       "runtime": 2.005564,
00:26:20.143       "iops": 17256.991050896406,
00:26:20.143       "mibps": 67.41012129256409,
00:26:20.143       "io_failed": 0,
00:26:20.143       "io_timeout": 0,
00:26:20.143       "avg_latency_us": 7409.540596044233,
00:26:20.143       "min_latency_us": 4200.261818181818,
"max_latency_us": 22758.865454545456 00:26:20.143 } 00:26:20.143 ], 00:26:20.143 "core_count": 1 00:26:20.143 } 00:26:20.143 13:20:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:20.143 13:20:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:20.143 13:20:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:20.143 13:20:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:20.143 | .driver_specific 00:26:20.143 | .nvme_error 00:26:20.143 | .status_code 00:26:20.143 | .command_transient_transport_error' 00:26:20.406 13:20:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 135 > 0 )) 00:26:20.406 13:20:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94555 00:26:20.406 13:20:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94555 ']' 00:26:20.406 13:20:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94555 00:26:20.406 13:20:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:20.406 13:20:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:20.406 13:20:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94555 00:26:20.406 killing process with pid 94555 00:26:20.406 Received shutdown signal, test time was about 2.000000 seconds 00:26:20.406 00:26:20.406 Latency(us) 00:26:20.406 [2024-12-06T13:20:26.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.406 [2024-12-06T13:20:26.935Z] =================================================================================================================== 00:26:20.407 [2024-12-06T13:20:26.935Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:20.407 13:20:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:20.407 13:20:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:20.407 13:20:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94555' 00:26:20.407 13:20:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94555 00:26:20.407 13:20:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94555 00:26:20.665 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:20.665 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:20.665 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:20.665 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:20.665 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:20.665 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94632 00:26:20.665 13:20:27 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:20.665 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94632 /var/tmp/bperf.sock 00:26:20.665 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94632 ']' 00:26:20.665 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:20.665 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.665 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:20.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:20.665 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.665 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.665 [2024-12-06 13:20:27.070160] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:26:20.665 [2024-12-06 13:20:27.070490] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94632 ] 00:26:20.665 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:20.665 Zero copy mechanism will not be used. 
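[Note: the launch above starts bdevperf with -z, which brings the app up idle and waits for RPC configuration on the -r socket (/var/tmp/bperf.sock) rather than starting the workload immediately; waitforlisten then blocks until that socket answers RPCs. A minimal stand-in for that wait, assuming any successful RPC (rpc_get_methods here) proves the app is ready -- an illustrative sketch, not the waitforlisten implementation:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  # Poll the UNIX socket until bdevperf responds to RPCs.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
]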
00:26:20.924 [2024-12-06 13:20:27.216043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.924 [2024-12-06 13:20:27.248884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.924 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.924 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:20.924 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:20.924 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:21.182 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:21.182 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.182 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.182 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.182 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.182 13:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.750 nvme0n1 00:26:21.750 13:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:21.750 13:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.750 13:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.750 13:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.750 13:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:21.750 13:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:21.750 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:21.750 Zero copy mechanism will not be used. 00:26:21.750 Running I/O for 2 seconds... 
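[Note: the RPC sequence above is the digest-error flow in miniature: bdevperf is configured over /var/tmp/bperf.sock (bperf_rpc) with per-error-code counters and no retries, crc32c error injection is disabled while the controller attaches with data digest enabled (--ddgst), corruption is then injected so computed data digests stop matching, and after the timed run the transient-error count is read back from bdev_get_iostat. A condensed sketch built only from the commands visible in the trace; the bare rpc_cmd calls appear to target the default (target-side) RPC socket rather than bperf.sock, and the dotted jq path is equivalent to digest.sh's piped filter:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf=/var/tmp/bperf.sock

  # Initiator side: count NVMe errors per status code, never retry failed I/O.
  $rpc -s $bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Keep crc32c injection off while connecting (default socket, per rpc_cmd).
  $rpc accel_error_inject_error -o crc32c -t disable
  # Attach with data digest enabled so every READ payload is digest-checked.
  $rpc -s $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Now corrupt crc32c results (the -i 32 argument from the trace) and run the workload.
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bperf perform_tests
  # Each digest failure completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22); count them.
  $rpc -s $bperf bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
]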
00:26:21.750 [2024-12-06 13:20:28.139151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:21.750 [2024-12-06 13:20:28.139218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.750 [2024-12-06 13:20:28.139234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... identical groups repeat from 13:20:28.144 through 13:20:28.275: each is a data digest error on tqpair 0x1339dd0 that fails a len:32 READ with COMMAND TRANSIENT TRANSPORT ERROR (00/22); timestamp, cid, lba, and sqhd vary ...]
00:26:22.011 [2024-12-06 13:20:28.280067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.012 [2024-12-06 13:20:28.280112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1472 len:32 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.280126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.284326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.284369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.284383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.287879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.287922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.287937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.291616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.291658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.291672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.295882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.295923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.295937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.299194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.299235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.299249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.303536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.303578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.303592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.308131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.308172] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.308186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.311651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.311695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.311709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.315956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.315995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.316009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.320936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.320988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.321002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.325312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.325356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.325370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.328243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.328283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.328296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.333061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.333102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.333115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.337456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 
13:20:28.337498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.337512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.340810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.340851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.340865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.345491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.345537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.345552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.350326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.350371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.350386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.355173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.355223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.355237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.359960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.360002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.360016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.363463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.363504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.363518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.367901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.367956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.367972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.371508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.371550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.371563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.374789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.374829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.374843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.378778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.378819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.378833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.382773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.012 [2024-12-06 13:20:28.382814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.012 [2024-12-06 13:20:28.382828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.012 [2024-12-06 13:20:28.386023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.386061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.386075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.390252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.390294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.390308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.393472] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.393512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.393526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.397351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.397393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.397406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.400936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.400987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.401002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.405070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.405112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.405126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.408676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.408721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.408736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.413398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.413440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.413454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.418260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.418303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.418316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:22.013 [2024-12-06 13:20:28.421692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.421734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.421748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.425885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.425924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.425938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.429106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.429146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.429160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.432962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.432998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.433012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.436633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.436672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.436686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.440357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.440396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.440410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.444722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.444761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.444774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.448184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.448223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.448237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.452354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.452393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.452407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.456859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.456901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.456915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.461226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.461267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.461282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.464752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.464794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.464808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.468865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.468905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.468919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.472662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.472702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.472717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.476875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.476915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.476929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.480640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.480679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.480693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.484872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.484911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.484925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.488424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.013 [2024-12-06 13:20:28.488462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.013 [2024-12-06 13:20:28.488476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.013 [2024-12-06 13:20:28.492219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.014 [2024-12-06 13:20:28.492257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.014 [2024-12-06 13:20:28.492271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.014 [2024-12-06 13:20:28.496615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.014 [2024-12-06 13:20:28.496655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.014 [2024-12-06 13:20:28.496669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.014 [2024-12-06 13:20:28.501268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.014 [2024-12-06 13:20:28.501306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:22.014 [2024-12-06 13:20:28.501320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.014 [2024-12-06 13:20:28.504389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.014 [2024-12-06 13:20:28.504427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.014 [2024-12-06 13:20:28.504440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.014 [2024-12-06 13:20:28.509043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.014 [2024-12-06 13:20:28.509081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.014 [2024-12-06 13:20:28.509095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.014 [2024-12-06 13:20:28.513460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.014 [2024-12-06 13:20:28.513498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.014 [2024-12-06 13:20:28.513512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.014 [2024-12-06 13:20:28.516320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.014 [2024-12-06 13:20:28.516359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.014 [2024-12-06 13:20:28.516372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.014 [2024-12-06 13:20:28.520930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.014 [2024-12-06 13:20:28.520983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.014 [2024-12-06 13:20:28.520998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.014 [2024-12-06 13:20:28.525151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.014 [2024-12-06 13:20:28.525189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.014 [2024-12-06 13:20:28.525202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.014 [2024-12-06 13:20:28.528336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.014 [2024-12-06 13:20:28.528377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.014 [2024-12-06 13:20:28.528390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.014 [2024-12-06 13:20:28.533127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.014 [2024-12-06 13:20:28.533171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.014 [2024-12-06 13:20:28.533186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.275 [2024-12-06 13:20:28.538203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.275 [2024-12-06 13:20:28.538248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.275 [2024-12-06 13:20:28.538263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.275 [2024-12-06 13:20:28.542746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.275 [2024-12-06 13:20:28.542800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.275 [2024-12-06 13:20:28.542822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.275 [2024-12-06 13:20:28.545607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.275 [2024-12-06 13:20:28.545650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.275 [2024-12-06 13:20:28.545664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.275 [2024-12-06 13:20:28.550184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.275 [2024-12-06 13:20:28.550227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.275 [2024-12-06 13:20:28.550241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.275 [2024-12-06 13:20:28.553803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.275 [2024-12-06 13:20:28.553844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.275 [2024-12-06 13:20:28.553858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.275 [2024-12-06 13:20:28.557612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.275 [2024-12-06 13:20:28.557654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.275 [2024-12-06 13:20:28.557668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.275 [2024-12-06 13:20:28.561770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.275 [2024-12-06 13:20:28.561815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.275 [2024-12-06 13:20:28.561829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.275 [2024-12-06 13:20:28.565088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.275 [2024-12-06 13:20:28.565129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.275 [2024-12-06 13:20:28.565143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.275 [2024-12-06 13:20:28.569264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.275 [2024-12-06 13:20:28.569305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.275 [2024-12-06 13:20:28.569319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.275 [2024-12-06 13:20:28.573543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.275 [2024-12-06 13:20:28.573586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.275 [2024-12-06 13:20:28.573600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.275 [2024-12-06 13:20:28.576846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.275 [2024-12-06 13:20:28.576888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.275 [2024-12-06 13:20:28.576902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.275 [2024-12-06 13:20:28.580697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.275 [2024-12-06 13:20:28.580737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.275 [2024-12-06 13:20:28.580751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.275 [2024-12-06 13:20:28.584449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 
00:26:22.275 [2024-12-06 13:20:28.584489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.584502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.588529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.588570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.588583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.592484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.592525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.592539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.596268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.596309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.596323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.600142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.600183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.600197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.603870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.603915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.603929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.608384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.608428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.608443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.612320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.612363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.612378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.616221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.616264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.616278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.619616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.619657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.619670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.623675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.623717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.623730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.628708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.628750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.628764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.632272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.632313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.632327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.636525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.636567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.636581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.641257] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.641299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.641314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.645557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.645597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.645611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.650377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.650418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.650432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.653394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.653435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.653449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.657843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.657884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.657898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.663091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.663133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.663147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.666669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.666709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.666723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:26:22.276 [2024-12-06 13:20:28.670791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.670832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.670846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.675641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.675687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.675702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.679145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.679196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.679210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.682303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.682346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.682361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.686807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.686851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.686865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.692128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.276 [2024-12-06 13:20:28.692171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.276 [2024-12-06 13:20:28.692186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.276 [2024-12-06 13:20:28.697299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.697341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.697356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.700853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.700895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.700909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.705332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.705375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.705389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.710198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.710241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.710254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.714659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.714700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.714714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.717779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.717819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.717832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.721934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.721988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.722002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.726007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.726048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.726062] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.730081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.730121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.730134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.733724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.733764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.733778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.737385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.737425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.737439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.742276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.742318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.742331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.745551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.745591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.745606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.749607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.749648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.749661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.754270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.754312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.754326] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.757616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.757656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.757670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.761666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.761707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.761721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.765817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.765858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.765872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.769201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.769240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.769254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.773377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.773418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.773432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.777912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.777967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.777982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.781327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.781366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:22.277 [2024-12-06 13:20:28.781380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.785658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.785698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.785711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.790189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.790230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.790244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.793889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.793929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.793956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.277 [2024-12-06 13:20:28.797842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.277 [2024-12-06 13:20:28.797887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.277 [2024-12-06 13:20:28.797901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.538 [2024-12-06 13:20:28.802103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.538 [2024-12-06 13:20:28.802147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.538 [2024-12-06 13:20:28.802162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.538 [2024-12-06 13:20:28.805525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.538 [2024-12-06 13:20:28.805565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.538 [2024-12-06 13:20:28.805579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.538 [2024-12-06 13:20:28.808709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.538 [2024-12-06 13:20:28.808753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.538 [2024-12-06 13:20:28.808768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.538 [2024-12-06 13:20:28.813087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.538 [2024-12-06 13:20:28.813128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.538 [2024-12-06 13:20:28.813142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.538 [2024-12-06 13:20:28.817533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.538 [2024-12-06 13:20:28.817576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.538 [2024-12-06 13:20:28.817589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.538 [2024-12-06 13:20:28.820526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.538 [2024-12-06 13:20:28.820565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.538 [2024-12-06 13:20:28.820578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.538 [2024-12-06 13:20:28.824691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.538 [2024-12-06 13:20:28.824732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.538 [2024-12-06 13:20:28.824746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.538 [2024-12-06 13:20:28.829600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.538 [2024-12-06 13:20:28.829642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.538 [2024-12-06 13:20:28.829656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.538 [2024-12-06 13:20:28.832934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.538 [2024-12-06 13:20:28.832987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.538 [2024-12-06 13:20:28.833001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.538 [2024-12-06 13:20:28.837178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.538 [2024-12-06 13:20:28.837219] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.538 [2024-12-06 13:20:28.837233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.538 [2024-12-06 13:20:28.841228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.538 [2024-12-06 13:20:28.841269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.538 [2024-12-06 13:20:28.841282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.538 [2024-12-06 13:20:28.844328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.538 [2024-12-06 13:20:28.844370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.538 [2024-12-06 13:20:28.844383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.538 [2024-12-06 13:20:28.849145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.538 [2024-12-06 13:20:28.849185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.849198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.853506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.853547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.853561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.856684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.856724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.856737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.861031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.861088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.861109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.866183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 
[2024-12-06 13:20:28.866228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.866243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.871114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.871168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.871185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.874190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.874230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.874244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.879241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.879282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.879297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.884300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.884342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.884356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.887869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.887911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.887925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.892246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.892287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.892301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.897367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.897413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.897427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.902721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.902774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.902788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.906337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.906378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.906391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.910691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.910733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.910746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.915792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.915834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.915848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.920645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.920687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.920700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.923986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.924025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.924038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.928001] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.928044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.928058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.932459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.932502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.932516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.935933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.935985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.935999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.939289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.939328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.939341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.942633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.942673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.942686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.946846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.946888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.946901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.951446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.951486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.951500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:26:22.539 [2024-12-06 13:20:28.955298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.955339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.955352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.958543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.958582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.539 [2024-12-06 13:20:28.958595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.539 [2024-12-06 13:20:28.962718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.539 [2024-12-06 13:20:28.962761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:28.962775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:28.966745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:28.966785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:28.966798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:28.970041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:28.970081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:28.970095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:28.974116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:28.974157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:28.974171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:28.977788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:28.977831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:28.977845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:28.981862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:28.981903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:28.981916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:28.985556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:28.985600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:28.985613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:28.989010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:28.989049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:28.989062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:28.993250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:28.993292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:28.993305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:28.996607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:28.996648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:28.996661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:29.000934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:29.000989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:29.001003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:29.006057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:29.006099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:29.006112] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:29.009237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:29.009276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:29.009290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:29.013372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:29.013414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:29.013427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:29.018326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:29.018368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:29.018381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:29.022694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:29.022734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:29.022748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:29.025603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:29.025643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:29.025656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:29.029736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:29.029776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:29.029790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:29.034012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:29.034053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:29.034066] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:29.038054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:29.038098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:29.038112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:29.042000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:29.042044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:29.042058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:29.046132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:29.046180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:29.046194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:29.050091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:29.050135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:29.050150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:29.053969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:29.054009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:29.054023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:29.057231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:29.057273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.540 [2024-12-06 13:20:29.057287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.540 [2024-12-06 13:20:29.061743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.540 [2024-12-06 13:20:29.061787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:22.540 [2024-12-06 13:20:29.061802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.801 [2024-12-06 13:20:29.066677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.801 [2024-12-06 13:20:29.066721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.802 [2024-12-06 13:20:29.066736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.802 [2024-12-06 13:20:29.070332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.802 [2024-12-06 13:20:29.070376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.802 [2024-12-06 13:20:29.070390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.802 [2024-12-06 13:20:29.074666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.802 [2024-12-06 13:20:29.074709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.802 [2024-12-06 13:20:29.074723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.802 [2024-12-06 13:20:29.079695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.802 [2024-12-06 13:20:29.079738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.802 [2024-12-06 13:20:29.079752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.802 [2024-12-06 13:20:29.084347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.802 [2024-12-06 13:20:29.084389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.802 [2024-12-06 13:20:29.084403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.802 [2024-12-06 13:20:29.087063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.802 [2024-12-06 13:20:29.087101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.802 [2024-12-06 13:20:29.087114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.802 [2024-12-06 13:20:29.091808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.802 [2024-12-06 13:20:29.091851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.802 [2024-12-06 13:20:29.091864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.802 [2024-12-06 13:20:29.094749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.802 [2024-12-06 13:20:29.094789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.802 [2024-12-06 13:20:29.094803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.802 [2024-12-06 13:20:29.098027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.802 [2024-12-06 13:20:29.098067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.802 [2024-12-06 13:20:29.098081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.802 [2024-12-06 13:20:29.102033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.802 [2024-12-06 13:20:29.102074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.802 [2024-12-06 13:20:29.102088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.802 [2024-12-06 13:20:29.105935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.802 [2024-12-06 13:20:29.105991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.802 [2024-12-06 13:20:29.106005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.802 [2024-12-06 13:20:29.109468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.802 [2024-12-06 13:20:29.109526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.802 [2024-12-06 13:20:29.109548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.802 [2024-12-06 13:20:29.114279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.802 [2024-12-06 13:20:29.114324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.802 [2024-12-06 13:20:29.114338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.802 [2024-12-06 13:20:29.118824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:22.802 [2024-12-06 13:20:29.118868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.802 [2024-12-06 13:20:29.118882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.802 [2024-12-06 13:20:29.122390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.802 [2024-12-06 13:20:29.122434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.802 [2024-12-06 13:20:29.122449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.802 [2024-12-06 13:20:29.127183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.802 [2024-12-06 13:20:29.127228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.802 [2024-12-06 13:20:29.127242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.802 [2024-12-06 13:20:29.131539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.802 [2024-12-06 13:20:29.131581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.802 [2024-12-06 13:20:29.131596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.802 [2024-12-06 13:20:29.135108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.802 [2024-12-06 13:20:29.135150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.802 [2024-12-06 13:20:29.135177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.802 7600.00 IOPS, 950.00 MiB/s [2024-12-06T13:20:29.330Z]
[2024-12-06 13:20:29.140909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.802 [2024-12-06 13:20:29.140965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.802 [2024-12-06 13:20:29.140981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.802 [2024-12-06 13:20:29.144493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.802 [2024-12-06 13:20:29.144535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.802 [2024-12-06 13:20:29.144549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.802 [2024-12-06 13:20:29.148785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.802 [2024-12-06 13:20:29.148826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.802 [2024-12-06 13:20:29.148840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.802 [2024-12-06 13:20:29.153464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.802 [2024-12-06 13:20:29.153506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.802 [2024-12-06 13:20:29.153519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.802 [2024-12-06 13:20:29.156313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.802 [2024-12-06 13:20:29.156355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.802 [2024-12-06 13:20:29.156369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.161754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.161801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.161815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.166698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.166743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.166758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.169665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.169706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.169719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.174063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.174104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.174117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.178577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.178620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.178633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.182726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.182767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.182781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.185753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.185794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.185808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.190129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.190170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.190183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.194555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.194597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.194612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.198067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.198107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.198120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.202215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.202257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.202271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.206733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.206776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.206790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.210362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.210403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.210416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.214904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.214957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.214973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.219882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.219924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.219938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.224678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.224720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.224734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.228277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.228316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.228329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.232572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.232614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.232628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.237591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.237633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.237646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.241819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.241860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.241874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.245109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.245149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.245164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.249423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.249463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.249477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.253233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.253274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.253288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.257233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.803 [2024-12-06 13:20:29.257274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.803 [2024-12-06 13:20:29.257288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.803 [2024-12-06 13:20:29.261110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.804 [2024-12-06 13:20:29.261151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.804 [2024-12-06 13:20:29.261165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.804 [2024-12-06 13:20:29.264815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.804 [2024-12-06 13:20:29.264855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.804 [2024-12-06 13:20:29.264868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.804 [2024-12-06 13:20:29.269254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.804 [2024-12-06 13:20:29.269297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.804 [2024-12-06 13:20:29.269310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.804 [2024-12-06 13:20:29.272688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.804 [2024-12-06 13:20:29.272730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.804 [2024-12-06 13:20:29.272744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.804 [2024-12-06 13:20:29.277143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.804 [2024-12-06 13:20:29.277184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.804 [2024-12-06 13:20:29.277197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.804 [2024-12-06 13:20:29.282045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.804 [2024-12-06 13:20:29.282087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.804 [2024-12-06 13:20:29.282101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.804 [2024-12-06 13:20:29.286697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.804 [2024-12-06 13:20:29.286739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.804 [2024-12-06 13:20:29.286753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.804 [2024-12-06 13:20:29.290304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.804 [2024-12-06 13:20:29.290346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.804 [2024-12-06 13:20:29.290360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.804 [2024-12-06 13:20:29.294633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.804 [2024-12-06 13:20:29.294675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.804 [2024-12-06 13:20:29.294688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.804 [2024-12-06 13:20:29.299504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.804 [2024-12-06 13:20:29.299545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.804 [2024-12-06 13:20:29.299559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.804 [2024-12-06 13:20:29.303831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.804 [2024-12-06 13:20:29.303871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.804 [2024-12-06 13:20:29.303885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.804 [2024-12-06 13:20:29.308675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.804 [2024-12-06 13:20:29.308716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.804 [2024-12-06 13:20:29.308731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.804 [2024-12-06 13:20:29.313325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.804 [2024-12-06 13:20:29.313366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.804 [2024-12-06 13:20:29.313380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.804 [2024-12-06 13:20:29.316348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.804 [2024-12-06 13:20:29.316388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.804 [2024-12-06 13:20:29.316402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.804 [2024-12-06 13:20:29.321438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:22.804 [2024-12-06 13:20:29.321480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.804 [2024-12-06 13:20:29.321493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.065 [2024-12-06 13:20:29.326687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.065 [2024-12-06 13:20:29.326733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.065 [2024-12-06 13:20:29.326748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.065 [2024-12-06 13:20:29.331471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.065 [2024-12-06 13:20:29.331514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.065 [2024-12-06 13:20:29.331528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.065 [2024-12-06 13:20:29.336241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.065 [2024-12-06 13:20:29.336286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.065 [2024-12-06 13:20:29.336300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.065 [2024-12-06 13:20:29.339038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.065 [2024-12-06 13:20:29.339075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.339088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.344016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.344059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.344073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.348638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.348681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.348694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.351408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.351449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.351462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.356214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.356256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.356269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.361014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.361055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.361069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.365802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.365854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.365868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.368686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.368738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.368751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.373811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.373854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.373868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.378571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.378616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.378631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.381836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.381884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.381899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.386582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.386626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.386641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.391248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.391291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.391305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.396289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.396333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.396347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.401236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.401280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.401294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.405618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.405661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.405676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.408326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.408368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.408381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.413313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.413355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.413369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.418405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.418449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.418463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.422793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.422839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.422853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.426261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.426317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.426333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.429856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.429901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.429915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.433170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.433212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.433226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.437508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.437549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.437563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.442495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.442537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.442551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.447172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.447213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.447228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.449926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.066 [2024-12-06 13:20:29.449974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.066 [2024-12-06 13:20:29.449987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.066 [2024-12-06 13:20:29.454772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.454814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.454828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.459539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.459580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.459594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.462379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.462419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.462432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.467249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.467291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.467306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.471955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.471995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.472008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.474735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.474774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.474788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.479817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.479859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.479872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.484240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.484281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.484295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.486954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.486991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.487005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.491933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.491988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.492002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.495407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.495447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.495461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.499588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.499630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.499643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.504016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.504058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.504072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.507235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.507275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.507289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.511887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.511932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.511960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.516873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.516915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.516928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.520440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.520481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.520494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.524766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.524807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.524821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.529693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.529735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.529749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.534419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.534460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.534474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.537210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.537249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.067 [2024-12-06 13:20:29.537262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.067 [2024-12-06 13:20:29.542124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.067 [2024-12-06 13:20:29.542166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.068 [2024-12-06 13:20:29.542180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.068 [2024-12-06 13:20:29.545619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.068 [2024-12-06 13:20:29.545660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.068 [2024-12-06 13:20:29.545673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.068 [2024-12-06 13:20:29.549643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.068 [2024-12-06 13:20:29.549691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.068 [2024-12-06 13:20:29.549704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.068 [2024-12-06 13:20:29.553830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.068 [2024-12-06 13:20:29.553871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.068 [2024-12-06 13:20:29.553885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.068 [2024-12-06 13:20:29.557978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.068 [2024-12-06 13:20:29.558019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.068 [2024-12-06 13:20:29.558032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.068 [2024-12-06 13:20:29.561709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.068 [2024-12-06 13:20:29.561751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.068 [2024-12-06 13:20:29.561765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.068 [2024-12-06 13:20:29.565318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.068 [2024-12-06 13:20:29.565358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.068 [2024-12-06 13:20:29.565372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.068 [2024-12-06 13:20:29.569201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.068 [2024-12-06 13:20:29.569242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.068 [2024-12-06 13:20:29.569256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.068 [2024-12-06 13:20:29.572860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.068 [2024-12-06 13:20:29.572901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.068 [2024-12-06 13:20:29.572915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.068 [2024-12-06 13:20:29.576859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.068 [2024-12-06 13:20:29.576900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.068 [2024-12-06 13:20:29.576913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.068 [2024-12-06 13:20:29.580783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.068 [2024-12-06 13:20:29.580824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.068 [2024-12-06 13:20:29.580838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.068 [2024-12-06 13:20:29.585412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.068 [2024-12-06 13:20:29.585461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.068 [2024-12-06 13:20:29.585479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.068 [2024-12-06 13:20:29.589331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.068 [2024-12-06 13:20:29.589375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.068 [2024-12-06 13:20:29.589390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.593332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.593377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.593392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.597448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.597494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.597508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.601237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.601280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.601295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.604586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.604627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.604641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.608078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.608120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.608134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.612123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.612164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.612177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.617034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.617077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.617091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.620444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.620486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.620499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.624778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.624819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.624833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.629546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.629588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.629602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.634050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.634109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.634123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.636916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.636978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.636993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.641802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.641869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.641886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.647188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.647258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.647274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.651201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.651254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.651270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.654862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.654917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.654932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.659182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.659247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.659263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.662967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.663012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.663026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.666864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.666919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.666934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.671321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.671375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.671390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.675426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.675483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.675498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.679663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.679723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.330 [2024-12-06 13:20:29.679737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.330 [2024-12-06 13:20:29.683682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.330 [2024-12-06 13:20:29.683742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.331 [2024-12-06 13:20:29.683757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.331 [2024-12-06 13:20:29.687613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.331 [2024-12-06 13:20:29.687670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.331 [2024-12-06 13:20:29.687686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.331 [2024-12-06 13:20:29.691308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.331 [2024-12-06 13:20:29.691362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.331 [2024-12-06 13:20:29.691377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.331 [2024-12-06 13:20:29.695806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.331 [2024-12-06 13:20:29.695873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.331 [2024-12-06 13:20:29.695887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.331 [2024-12-06 13:20:29.700638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.331 [2024-12-06 13:20:29.700706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.331 [2024-12-06 13:20:29.700723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.331 [2024-12-06 13:20:29.704039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.331 [2024-12-06 13:20:29.704096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.331 [2024-12-06 13:20:29.704112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.331 [2024-12-06 13:20:29.708822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.331 [2024-12-06 13:20:29.708889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.331 [2024-12-06 13:20:29.708905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.331 [2024-12-06 13:20:29.714662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.331 [2024-12-06 13:20:29.714732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.331 [2024-12-06 13:20:29.714748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.331 [2024-12-06 13:20:29.720177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.331 [2024-12-06 13:20:29.720257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.331 [2024-12-06 13:20:29.720273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.331 [2024-12-06 13:20:29.725267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0)
00:26:23.331 [2024-12-06 13:20:29.725337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.331 [2024-12-06 13:20:29.725352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.331 [2024-12-06 13:20:29.729970]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.331 [2024-12-06 13:20:29.730029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.331 [2024-12-06 13:20:29.730044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.331 [2024-12-06 13:20:29.733803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.331 [2024-12-06 13:20:29.733860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.331 [2024-12-06 13:20:29.733874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.331 [2024-12-06 13:20:29.737650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.331 [2024-12-06 13:20:29.737705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.331 [2024-12-06 13:20:29.737719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.331 [2024-12-06 13:20:29.742027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.331 [2024-12-06 13:20:29.742089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.331 [2024-12-06 13:20:29.742105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.331 [2024-12-06 13:20:29.745665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.331 [2024-12-06 13:20:29.745719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.331 [2024-12-06 13:20:29.745733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.331 [2024-12-06 13:20:29.750009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.331 [2024-12-06 13:20:29.750064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.331 [2024-12-06 13:20:29.750079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.331 [2024-12-06 13:20:29.754255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.331 [2024-12-06 13:20:29.754305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.331 [2024-12-06 13:20:29.754320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:23.331 [2024-12-06 13:20:29.758681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.331 [2024-12-06 13:20:29.758723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.331 [2024-12-06 13:20:29.758736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.331 [2024-12-06 13:20:29.761773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.331 [2024-12-06 13:20:29.761813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.331 [2024-12-06 13:20:29.761826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.331 [2024-12-06 13:20:29.766149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.331 [2024-12-06 13:20:29.766192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.331 [2024-12-06 13:20:29.766206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.331 [2024-12-06 13:20:29.769563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.331 [2024-12-06 13:20:29.769604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.331 [2024-12-06 13:20:29.769618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.331 [2024-12-06 13:20:29.773592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.331 [2024-12-06 13:20:29.773632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.331 [2024-12-06 13:20:29.773645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.331 [2024-12-06 13:20:29.777143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.331 [2024-12-06 13:20:29.777185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.331 [2024-12-06 13:20:29.777199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.331 [2024-12-06 13:20:29.780563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.331 [2024-12-06 13:20:29.780604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.331 [2024-12-06 13:20:29.780617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.331 [2024-12-06 13:20:29.784500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.331 [2024-12-06 13:20:29.784542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.331 [2024-12-06 13:20:29.784556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.331 [2024-12-06 13:20:29.788418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.331 [2024-12-06 13:20:29.788464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.331 [2024-12-06 13:20:29.788478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.331 [2024-12-06 13:20:29.792022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.331 [2024-12-06 13:20:29.792066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.331 [2024-12-06 13:20:29.792080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.331 [2024-12-06 13:20:29.796182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.331 [2024-12-06 13:20:29.796226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.331 [2024-12-06 13:20:29.796240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.332 [2024-12-06 13:20:29.800096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.332 [2024-12-06 13:20:29.800138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.332 [2024-12-06 13:20:29.800152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.332 [2024-12-06 13:20:29.803666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.332 [2024-12-06 13:20:29.803707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.332 [2024-12-06 13:20:29.803722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.332 [2024-12-06 13:20:29.807998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.332 [2024-12-06 13:20:29.808039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.332 [2024-12-06 13:20:29.808053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.332 [2024-12-06 13:20:29.811613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.332 [2024-12-06 13:20:29.811655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.332 [2024-12-06 13:20:29.811669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.332 [2024-12-06 13:20:29.815744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.332 [2024-12-06 13:20:29.815786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.332 [2024-12-06 13:20:29.815800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.332 [2024-12-06 13:20:29.819442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.332 [2024-12-06 13:20:29.819483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.332 [2024-12-06 13:20:29.819497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.332 [2024-12-06 13:20:29.823830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.332 [2024-12-06 13:20:29.823872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.332 [2024-12-06 13:20:29.823886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.332 [2024-12-06 13:20:29.826577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.332 [2024-12-06 13:20:29.826616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.332 [2024-12-06 13:20:29.826629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.332 [2024-12-06 13:20:29.830894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.332 [2024-12-06 13:20:29.830936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.332 [2024-12-06 13:20:29.830964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.332 [2024-12-06 13:20:29.834987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.332 [2024-12-06 13:20:29.835026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.332 
[2024-12-06 13:20:29.835040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.332 [2024-12-06 13:20:29.838543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.332 [2024-12-06 13:20:29.838583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.332 [2024-12-06 13:20:29.838597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.332 [2024-12-06 13:20:29.842313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.332 [2024-12-06 13:20:29.842352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.332 [2024-12-06 13:20:29.842366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.332 [2024-12-06 13:20:29.846746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.332 [2024-12-06 13:20:29.846786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.332 [2024-12-06 13:20:29.846800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.332 [2024-12-06 13:20:29.850518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.332 [2024-12-06 13:20:29.850563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.332 [2024-12-06 13:20:29.850578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.592 [2024-12-06 13:20:29.853864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.592 [2024-12-06 13:20:29.853915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.592 [2024-12-06 13:20:29.853938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.592 [2024-12-06 13:20:29.858686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.592 [2024-12-06 13:20:29.858732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.592 [2024-12-06 13:20:29.858746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.592 [2024-12-06 13:20:29.863839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.592 [2024-12-06 13:20:29.863884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10464 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.863898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.869045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.869088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.869102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.872216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.872255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.872268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.876468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.876510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.876524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.880192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.880234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.880247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.883792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.883834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.883847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.887846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.887889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.887903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.891623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.891665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.891679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.894811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.894852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.894866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.899503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.899548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.899563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.903988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.904034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.904049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.907453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.907495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.907509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.911599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.911642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.911656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.915312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.915355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.915369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.919452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.919494] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.919508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.923498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.923540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.923554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.927029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.927069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.927082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.930772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.930814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.930828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.934886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.934928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.934955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.938623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.938664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.938678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.942425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.942466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.942480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.945863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 
00:26:23.593 [2024-12-06 13:20:29.945903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.945918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.950439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.950482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.950495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.955430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.955471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.955485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.959844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.959888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.959903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.962569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.962607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.962621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.593 [2024-12-06 13:20:29.967543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.593 [2024-12-06 13:20:29.967585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.593 [2024-12-06 13:20:29.967599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:29.971704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:29.971745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:29.971759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:29.975086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:29.975126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:29.975140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:29.979853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:29.979894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:29.979907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:29.982911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:29.982961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:29.982977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:29.987113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:29.987153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:29.987176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:29.992063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:29.992105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:29.992119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:29.996716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:29.996757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:29.996770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:29.999403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:29.999443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:29.999457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.005245] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.005311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.005337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.012355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.012419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.012438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.019493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.019550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.019572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.026251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.026313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.026335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.030735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.030796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.030819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.035866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.035934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.035973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.041131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.041174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.041189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.045376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.045418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.045432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.048937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.048999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.049014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.053973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.054018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.054033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.057239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.057281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.057294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.061656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.061701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.061715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.065649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.065694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.065708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.069207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.069249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.069264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.073200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.073242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.073255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.077093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.077139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.077153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.080800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.080843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.080858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.084982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.085022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.085035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.088986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.594 [2024-12-06 13:20:30.089027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.594 [2024-12-06 13:20:30.089042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.594 [2024-12-06 13:20:30.092249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.595 [2024-12-06 13:20:30.092290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.595 [2024-12-06 13:20:30.092304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.595 [2024-12-06 13:20:30.096465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.595 [2024-12-06 13:20:30.096506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.595 [2024-12-06 13:20:30.096520] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.595 [2024-12-06 13:20:30.100652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.595 [2024-12-06 13:20:30.100693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.595 [2024-12-06 13:20:30.100707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.595 [2024-12-06 13:20:30.103679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.595 [2024-12-06 13:20:30.103720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.595 [2024-12-06 13:20:30.103734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.595 [2024-12-06 13:20:30.108692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.595 [2024-12-06 13:20:30.108737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.595 [2024-12-06 13:20:30.108751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.595 [2024-12-06 13:20:30.113383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.595 [2024-12-06 13:20:30.113429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.595 [2024-12-06 13:20:30.113443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.595 [2024-12-06 13:20:30.116424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.595 [2024-12-06 13:20:30.116468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.595 [2024-12-06 13:20:30.116483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.854 [2024-12-06 13:20:30.121519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.854 [2024-12-06 13:20:30.121565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.854 [2024-12-06 13:20:30.121579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.854 [2024-12-06 13:20:30.126583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1339dd0) 00:26:23.854 [2024-12-06 13:20:30.126631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.854 [2024-12-06 13:20:30.126645] 
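[Aside, not from the job output: repetitive injection-error output like the elided block above condenses well with standard tools. A sketch, assuming the console output was saved to a file — "bdevperf.log" is a placeholder name, not a file this job produces:]

```bash
# Tally digest-error completions per command id (cid) from a saved log,
# listed in ascending cid order. "bdevperf.log" is a hypothetical path.
grep -o 'TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:[0-9]*' bdevperf.log |
  awk -F'cid:' '{n[$2]++} END {for (c in n) printf "cid %2d: %d errors\n", c, n[c]}' |
  sort -k2 -n
```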
00:26:23.854 7509.00 IOPS, 938.62 MiB/s
00:26:23.854                                                 Latency(us)
[2024-12-06T13:20:30.382Z] Device Information : runtime(s)    IOPS      MiB/s    Fail/s     TO/s    Average        min        max
[2024-12-06T13:20:30.382Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:23.854 nvme0n1 : 2.00 7504.69 938.09 0.00 0.00 2127.92 659.08 7596.22
[2024-12-06T13:20:30.382Z] ===================================================================================================================
[2024-12-06T13:20:30.382Z] Total : 7504.69 938.09 0.00 0.00 2127.92 659.08 7596.22
00:26:23.854 {
00:26:23.854   "results": [
00:26:23.854     {
00:26:23.854       "job": "nvme0n1",
00:26:23.854       "core_mask": "0x2",
00:26:23.854       "workload": "randread",
00:26:23.854       "status": "finished",
00:26:23.854       "queue_depth": 16,
00:26:23.854       "io_size": 131072,
00:26:23.854       "runtime": 2.003414,
00:26:23.854       "iops": 7504.68949503198,
00:26:23.854       "mibps": 938.0861868789975,
00:26:23.854       "io_failed": 0,
00:26:23.854       "io_timeout": 0,
00:26:23.854       "avg_latency_us": 2127.9227093146296,
00:26:23.854       "min_latency_us": 659.0836363636364,
00:26:23.854       "max_latency_us": 7596.218181818182
00:26:23.854     }
00:26:23.854   ],
00:26:23.854   "core_count": 1
00:26:23.854 }
00:26:23.854 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:23.854 | .driver_specific
00:26:23.854 | .nvme_error
00:26:23.854 | .status_code
00:26:23.854 | .command_transient_transport_error'
00:26:23.854 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:24.113 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 485 > 0 ))
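[Aside, not from the job output: the trace above is digest.sh's verification step — the per-controller NVMe error counters, enabled earlier with bdev_nvme_set_options --nvme-error-stat, are fetched from bdevperf over its private RPC socket, and the transient transport error count (485 here) must be non-zero. A minimal standalone sketch of the same check; paths mirror the trace and should be adjusted locally:]

```bash
#!/usr/bin/env bash
# Fetch the transient transport error count for nvme0n1 from a running
# bdevperf instance and fail if no injected error was observed.
SPDK_DIR=/home/vagrant/spdk_repo/spdk   # as in the trace; adjust locally
errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error // 0')
(( errcount > 0 )) || { echo "expected transient transport errors, got $errcount" >&2; exit 1; }
```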
common/autotest_common.sh@958 -- # kill -0 94632 00:26:24.113 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:24.113 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:24.113 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94632 00:26:24.113 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:24.113 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:24.113 killing process with pid 94632 00:26:24.113 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94632' 00:26:24.113 Received shutdown signal, test time was about 2.000000 seconds 00:26:24.113 00:26:24.113 Latency(us) 00:26:24.113 [2024-12-06T13:20:30.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.113 [2024-12-06T13:20:30.641Z] =================================================================================================================== 00:26:24.113 [2024-12-06T13:20:30.641Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:24.113 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94632 00:26:24.113 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94632 00:26:24.372 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:24.372 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:24.372 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:24.372 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:24.372 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:24.372 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94703 00:26:24.372 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94703 /var/tmp/bperf.sock 00:26:24.372 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:24.372 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94703 ']' 00:26:24.372 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:24.372 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:24.372 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
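The (( 485 > 0 )) check that passed just before this killprocess is the point of the whole randread case: 485 commands completed as COMMAND TRANSIENT TRANSPORT ERROR instead of surfacing as failed I/O. Untangling the timestamp-interleaved xtrace above, get_transient_errcount boils down to the following pipeline. A minimal sketch: the rpc.py path and the jq filter are exactly as logged, and the counter is populated because bdev_nvme_set_options is called with --nvme-error-stat before the run.

    # Read the per-status NVMe error counters kept on the bdevperf side
    # and pull out the transient-transport-error count for nvme0n1.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
          | .driver_specific
          | .nvme_error
          | .status_code
          | .command_transient_transport_error'

Because the controller is attached with --bdev-retry-count -1, each digest failure is retried rather than reported to the job, so the failures accumulate in this counter while the workload itself completes cleanly.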
00:26:24.372 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.372 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:24.372 [2024-12-06 13:20:30.716835] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:26:24.372 [2024-12-06 13:20:30.716986] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94703 ] 00:26:24.372 [2024-12-06 13:20:30.866175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.631 [2024-12-06 13:20:30.904812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.631 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.631 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:24.631 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:24.631 13:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:24.889 13:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:24.889 13:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.889 13:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:24.889 13:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.889 13:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.889 13:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.148 nvme0n1 00:26:25.148 13:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:25.148 13:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.148 13:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.407 13:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.407 13:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:25.407 13:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:25.407 Running I/O for 2 seconds... 
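Stripped of xtrace noise, the randwrite error case that produces the output below (run_bperf_err randwrite 4096 128) is this command sequence. A condensed sketch, with every path, flag, and address taken from the log above; it assumes, per the autotest convention, that the rpc_cmd calls reach the nvmf target at its default RPC socket while the bperf_rpc calls address bdevperf at /var/tmp/bperf.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Start bdevperf idle (-z) on core 1 (-m 2) and wait for its RPC socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock

    # Host side: keep per-status error counters and retry failed commands
    # indefinitely instead of failing the job.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat \
        --bdev-retry-count -1
    # Target side: clear any stale crc32c error injection.
    $rpc accel_error_inject_error -o crc32c -t disable
    # Host side: attach the TCP controller with data digest (--ddgst) enabled.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Target side: corrupt crc32c results (arguments verbatim from the log),
    # so the data digest the target computes for incoming WRITEs mismatches.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
    # Run the 2-second workload; each mismatch is logged by
    # data_crc32_calc_done and completes as COMMAND TRANSIENT TRANSPORT ERROR.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

Note the ordering: corruption is armed only after the controller attaches, so the connect itself succeeds and every injected error lands on workload WRITEs, which is exactly what the repeated tcp.c / nvme_qpair.c record pairs below show.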
00:26:25.407 [2024-12-06 13:20:31.822735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016eee190 00:26:25.407 [2024-12-06 13:20:31.824192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:31 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.407 [2024-12-06 13:20:31.824238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.407 [2024-12-06 13:20:31.835034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016efdeb0 00:26:25.407 [2024-12-06 13:20:31.836462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.407 [2024-12-06 13:20:31.836499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.407 [2024-12-06 13:20:31.846560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef4298 00:26:25.407 [2024-12-06 13:20:31.847859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.407 [2024-12-06 13:20:31.847898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:25.407 [2024-12-06 13:20:31.858111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef7538 00:26:25.407 [2024-12-06 13:20:31.859231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.407 [2024-12-06 13:20:31.859269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:25.407 [2024-12-06 13:20:31.869671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee8d30 00:26:25.407 [2024-12-06 13:20:31.870624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.407 [2024-12-06 13:20:31.870661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.407 [2024-12-06 13:20:31.881139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016eeb760 00:26:25.407 [2024-12-06 13:20:31.881906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.407 [2024-12-06 13:20:31.881950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.407 [2024-12-06 13:20:31.894745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef46d0 00:26:25.407 [2024-12-06 13:20:31.896048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.407 [2024-12-06 13:20:31.896086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 
sqhd:0019 p:0 m:0 dnr:0 00:26:25.407 [2024-12-06 13:20:31.909257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee8d30 00:26:25.407 [2024-12-06 13:20:31.911234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.407 [2024-12-06 13:20:31.911276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:25.407 [2024-12-06 13:20:31.917882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016eed920 00:26:25.407 [2024-12-06 13:20:31.918868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.407 [2024-12-06 13:20:31.918905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.407 [2024-12-06 13:20:31.930230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef5be8 00:26:25.407 [2024-12-06 13:20:31.931235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.407 [2024-12-06 13:20:31.931273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.665 [2024-12-06 13:20:31.941786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef46d0 00:26:25.665 [2024-12-06 13:20:31.942627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.665 [2024-12-06 13:20:31.942664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.665 [2024-12-06 13:20:31.956032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee95a0 00:26:25.665 [2024-12-06 13:20:31.957071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.665 [2024-12-06 13:20:31.957109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:25.665 [2024-12-06 13:20:31.967463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee3060 00:26:25.665 [2024-12-06 13:20:31.968313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.665 [2024-12-06 13:20:31.968350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.665 [2024-12-06 13:20:31.978876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016efbcf0 00:26:25.665 [2024-12-06 13:20:31.979607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.665 [2024-12-06 13:20:31.979643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:25.665 [2024-12-06 13:20:31.993730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee0ea0 00:26:25.665 [2024-12-06 13:20:31.995742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.665 [2024-12-06 13:20:31.995782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.665 [2024-12-06 13:20:32.002340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016eed0b0 00:26:25.665 [2024-12-06 13:20:32.003365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.665 [2024-12-06 13:20:32.003401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.665 [2024-12-06 13:20:32.016767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee3498 00:26:25.665 [2024-12-06 13:20:32.018476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.665 [2024-12-06 13:20:32.018512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.665 [2024-12-06 13:20:32.028006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee7818 00:26:25.665 [2024-12-06 13:20:32.029398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.665 [2024-12-06 13:20:32.029434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.665 [2024-12-06 13:20:32.039831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef81e0 00:26:25.665 [2024-12-06 13:20:32.041233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.665 [2024-12-06 13:20:32.041268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:25.665 [2024-12-06 13:20:32.051076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee0ea0 00:26:25.665 [2024-12-06 13:20:32.052159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.665 [2024-12-06 13:20:32.052198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.666 [2024-12-06 13:20:32.062824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ede8a8 00:26:25.666 [2024-12-06 13:20:32.063925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.666 [2024-12-06 13:20:32.063969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.666 [2024-12-06 13:20:32.077286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016efa7d8 00:26:25.666 [2024-12-06 13:20:32.079074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.666 [2024-12-06 13:20:32.079110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:25.666 [2024-12-06 13:20:32.085847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee5ec8 00:26:25.666 [2024-12-06 13:20:32.086639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.666 [2024-12-06 13:20:32.086674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.666 [2024-12-06 13:20:32.100287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016eee190 00:26:25.666 [2024-12-06 13:20:32.101756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.666 [2024-12-06 13:20:32.101794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:25.666 [2024-12-06 13:20:32.111489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016eef270 00:26:25.666 [2024-12-06 13:20:32.112650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.666 [2024-12-06 13:20:32.112686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.666 [2024-12-06 13:20:32.123219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee99d8 00:26:25.666 [2024-12-06 13:20:32.124392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.666 [2024-12-06 13:20:32.124427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.666 [2024-12-06 13:20:32.137316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef4b08 00:26:25.666 [2024-12-06 13:20:32.138830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.666 [2024-12-06 13:20:32.138871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.666 [2024-12-06 13:20:32.148850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee88f8 00:26:25.666 [2024-12-06 13:20:32.150241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.666 [2024-12-06 13:20:32.150277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.666 [2024-12-06 13:20:32.160312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef4f40 00:26:25.666 [2024-12-06 13:20:32.161511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.666 [2024-12-06 13:20:32.161548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:25.666 [2024-12-06 13:20:32.171862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef57b0 00:26:25.666 [2024-12-06 13:20:32.172935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.666 [2024-12-06 13:20:32.172988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:25.666 [2024-12-06 13:20:32.183347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016efe720 00:26:25.666 [2024-12-06 13:20:32.184223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.666 [2024-12-06 13:20:32.184259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:25.924 [2024-12-06 13:20:32.194867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef6cc8 00:26:25.924 [2024-12-06 13:20:32.195634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.924 [2024-12-06 13:20:32.195674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:25.924 [2024-12-06 13:20:32.210098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef1868 00:26:25.924 [2024-12-06 13:20:32.211847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.924 [2024-12-06 13:20:32.211887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.924 [2024-12-06 13:20:32.218902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee4140 00:26:25.924 [2024-12-06 13:20:32.219798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.924 [2024-12-06 13:20:32.219834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.924 [2024-12-06 13:20:32.233367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee4578 00:26:25.924 [2024-12-06 13:20:32.234929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.924 [2024-12-06 
13:20:32.234975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:25.924 [2024-12-06 13:20:32.244614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016eeaef0 00:26:25.924 [2024-12-06 13:20:32.245887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.924 [2024-12-06 13:20:32.245924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:25.924 [2024-12-06 13:20:32.256372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee5658 00:26:25.924 [2024-12-06 13:20:32.257633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.924 [2024-12-06 13:20:32.257667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:25.924 [2024-12-06 13:20:32.270831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016eed920 00:26:25.924 [2024-12-06 13:20:32.272794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.924 [2024-12-06 13:20:32.272830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.924 [2024-12-06 13:20:32.279402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef2948 00:26:25.924 [2024-12-06 13:20:32.280383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.924 [2024-12-06 13:20:32.280437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.924 [2024-12-06 13:20:32.294504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee23b8 00:26:25.924 [2024-12-06 13:20:32.296176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.924 [2024-12-06 13:20:32.296219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:25.924 [2024-12-06 13:20:32.305815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016efc998 00:26:25.924 [2024-12-06 13:20:32.307254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.924 [2024-12-06 13:20:32.307294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:25.924 [2024-12-06 13:20:32.317671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee7818 00:26:25.924 [2024-12-06 13:20:32.319045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:25.924 [2024-12-06 13:20:32.319080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:25.924 [2024-12-06 13:20:32.332185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016edf118 00:26:25.925 [2024-12-06 13:20:32.334233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.925 [2024-12-06 13:20:32.334273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.925 [2024-12-06 13:20:32.340819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef5378 00:26:25.925 [2024-12-06 13:20:32.341881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.925 [2024-12-06 13:20:32.341922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:25.925 [2024-12-06 13:20:32.355384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef7538 00:26:25.925 [2024-12-06 13:20:32.357158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.925 [2024-12-06 13:20:32.357205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:25.925 [2024-12-06 13:20:32.364198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee0a68 00:26:25.925 [2024-12-06 13:20:32.364950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.925 [2024-12-06 13:20:32.364989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.925 [2024-12-06 13:20:32.378655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee5220 00:26:25.925 [2024-12-06 13:20:32.380158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.925 [2024-12-06 13:20:32.380203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:25.925 [2024-12-06 13:20:32.390605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee12d8 00:26:25.925 [2024-12-06 13:20:32.391916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.925 [2024-12-06 13:20:32.391969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:25.925 [2024-12-06 13:20:32.402048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016efe720 00:26:25.925 [2024-12-06 13:20:32.403192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9335 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:25.925 [2024-12-06 13:20:32.403233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:25.925 [2024-12-06 13:20:32.413678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016eee5c8 00:26:25.925 [2024-12-06 13:20:32.414681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.925 [2024-12-06 13:20:32.414718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.925 [2024-12-06 13:20:32.426050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee5220 00:26:25.925 [2024-12-06 13:20:32.426654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.925 [2024-12-06 13:20:32.426690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.925 [2024-12-06 13:20:32.439970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016eebfd0 00:26:25.925 [2024-12-06 13:20:32.441428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.925 [2024-12-06 13:20:32.441465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:26.183 [2024-12-06 13:20:32.451115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef6cc8 00:26:26.183 [2024-12-06 13:20:32.452553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.183 [2024-12-06 13:20:32.452594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:26.183 [2024-12-06 13:20:32.462969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016eebfd0 00:26:26.183 [2024-12-06 13:20:32.464290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.183 [2024-12-06 13:20:32.464329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:26.183 [2024-12-06 13:20:32.475093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016eea680 00:26:26.183 [2024-12-06 13:20:32.475917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.183 [2024-12-06 13:20:32.475969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:26.183 [2024-12-06 13:20:32.486657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef1ca0 00:26:26.183 [2024-12-06 13:20:32.487382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:7146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.183 [2024-12-06 13:20:32.487419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:26.183 [2024-12-06 13:20:32.500462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef6458 00:26:26.183 [2024-12-06 13:20:32.501991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.183 [2024-12-06 13:20:32.502028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:26.183 [2024-12-06 13:20:32.511582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef8e88 00:26:26.183 [2024-12-06 13:20:32.512939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.183 [2024-12-06 13:20:32.512990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:26.183 [2024-12-06 13:20:32.523337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef5be8 00:26:26.183 [2024-12-06 13:20:32.524673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.183 [2024-12-06 13:20:32.524708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:26.183 [2024-12-06 13:20:32.537766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016eec840 00:26:26.183 [2024-12-06 13:20:32.539807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.183 [2024-12-06 13:20:32.539843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.183 [2024-12-06 13:20:32.546372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef3a28 00:26:26.183 [2024-12-06 13:20:32.547418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.184 [2024-12-06 13:20:32.547453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.184 [2024-12-06 13:20:32.560839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef1430 00:26:26.184 [2024-12-06 13:20:32.562412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.184 [2024-12-06 13:20:32.562447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.184 [2024-12-06 13:20:32.570466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef4f40 00:26:26.184 [2024-12-06 13:20:32.571360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:104 nsid:1 lba:9304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.184 [2024-12-06 13:20:32.571395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.184 [2024-12-06 13:20:32.583800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016edece0 00:26:26.184 [2024-12-06 13:20:32.585196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.184 [2024-12-06 13:20:32.585231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:26.184 [2024-12-06 13:20:32.595249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef8e88 00:26:26.184 [2024-12-06 13:20:32.596472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.184 [2024-12-06 13:20:32.596506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.184 [2024-12-06 13:20:32.606705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016eef6a8 00:26:26.184 [2024-12-06 13:20:32.607774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.184 [2024-12-06 13:20:32.607810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:26.184 [2024-12-06 13:20:32.620917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef3a28 00:26:26.184 [2024-12-06 13:20:32.622642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.184 [2024-12-06 13:20:32.622680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:26.184 [2024-12-06 13:20:32.629515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee9e10 00:26:26.184 [2024-12-06 13:20:32.630256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.184 [2024-12-06 13:20:32.630291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:26.184 [2024-12-06 13:20:32.641755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef2948 00:26:26.184 [2024-12-06 13:20:32.642483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.184 [2024-12-06 13:20:32.642521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:26.184 [2024-12-06 13:20:32.655981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef7538 00:26:26.184 [2024-12-06 13:20:32.656901] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.184 [2024-12-06 13:20:32.656940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:26.184 [2024-12-06 13:20:32.667470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef8618 00:26:26.184 [2024-12-06 13:20:32.668270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.184 [2024-12-06 13:20:32.668307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:26.184 [2024-12-06 13:20:32.678901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016eeaab8 00:26:26.184 [2024-12-06 13:20:32.679509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.184 [2024-12-06 13:20:32.679546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:26.184 [2024-12-06 13:20:32.692657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016efac10 00:26:26.184 [2024-12-06 13:20:32.694102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.184 [2024-12-06 13:20:32.694137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:26.184 [2024-12-06 13:20:32.704138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee6b70 00:26:26.184 [2024-12-06 13:20:32.705407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.184 [2024-12-06 13:20:32.705457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:26.443 [2024-12-06 13:20:32.715580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef5be8 00:26:26.443 [2024-12-06 13:20:32.716670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.443 [2024-12-06 13:20:32.716708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:26.443 [2024-12-06 13:20:32.727050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016efe2e8 00:26:26.443 [2024-12-06 13:20:32.728002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.443 [2024-12-06 13:20:32.728039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:26.443 [2024-12-06 13:20:32.738507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016edf550 00:26:26.443 [2024-12-06 13:20:32.739300] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.443 [2024-12-06 13:20:32.739340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:26.443 [2024-12-06 13:20:32.752384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef6020 00:26:26.443 [2024-12-06 13:20:32.753814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.443 [2024-12-06 13:20:32.753887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:26.443 [2024-12-06 13:20:32.765977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee1b48 00:26:26.443 [2024-12-06 13:20:32.767769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.443 [2024-12-06 13:20:32.767824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:26.443 [2024-12-06 13:20:32.777525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016eeee38 00:26:26.443 [2024-12-06 13:20:32.779152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.443 [2024-12-06 13:20:32.779203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:26.443 [2024-12-06 13:20:32.788966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef96f8 00:26:26.443 [2024-12-06 13:20:32.790411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.443 [2024-12-06 13:20:32.790450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:26.443 [2024-12-06 13:20:32.800452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef5378 00:26:26.443 [2024-12-06 13:20:32.801747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.443 [2024-12-06 13:20:32.801784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:26.443 20934.00 IOPS, 81.77 MiB/s [2024-12-06T13:20:32.971Z] [2024-12-06 13:20:32.813292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee88f8 00:26:26.443 [2024-12-06 13:20:32.814424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.443 [2024-12-06 13:20:32.814461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:26.443 [2024-12-06 13:20:32.825037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16fde60) with pdu=0x200016ee8088 00:26:26.443 [2024-12-06 13:20:32.826192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.443 [2024-12-06 13:20:32.826229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:26.443 [2024-12-06 13:20:32.839702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ef7100 00:26:26.443 [2024-12-06 13:20:32.841564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.443 [2024-12-06 13:20:32.841610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:26.443 [2024-12-06 13:20:32.848493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ede470 00:26:26.443 [2024-12-06 13:20:32.849327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.443 [2024-12-06 13:20:32.849364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:26.443 [2024-12-06 13:20:32.863214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee7c50 00:26:26.443 [2024-12-06 13:20:32.864855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.443 [2024-12-06 13:20:32.864891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:26.443 [2024-12-06 13:20:32.876422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016eec408 00:26:26.443 [2024-12-06 13:20:32.877938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.443 [2024-12-06 13:20:32.877981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:26.443 [2024-12-06 13:20:32.889418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee0a68 00:26:26.443 [2024-12-06 13:20:32.891197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.443 [2024-12-06 13:20:32.891233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:26.443 [2024-12-06 13:20:32.903104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fde60) with pdu=0x200016ee5658 00:26:26.443 [2024-12-06 13:20:32.904608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.443 [2024-12-06 13:20:32.904643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:26.443 [2024-12-06 13:20:32.915634] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x16fde60) with pdu=0x200016efda78
[2024-12-06 13:20:32.917049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-06 13:20:32.917083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0
[... repeated tcp.c:2241:data_crc32_calc_done "Data digest error" / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triples on tqpair=(0x16fde60), one per corrupted write from 13:20:32.92 through 13:20:33.80, omitted ...]
20877.00 IOPS, 81.55 MiB/s

Latency(us)
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
nvme0n1 : 2.00 20878.65 81.56 0.00 0.00 6123.10 2502.28 16086.11
===================================================================================================================
Total : 20878.65 81.56 0.00 0.00 6123.10 2502.28 16086.11

{
  "results": [
    {
      "job": "nvme0n1",
      "core_mask": "0x2",
      "workload": "randwrite",
      "status": "finished",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 2.004392,
      "iops": 20878.65048353815,
      "mibps": 81.5572284513209,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 6123.0983781952,
      "min_latency_us": 2502.2836363636366,
      "max_latency_us": 16086.10909090909
    }
  ],
  "core_count": 1
}
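The JSON summary above is machine-readable, so the headline numbers can be pulled out with a short jq filter. The one-liner below is illustrative only, not part of the test suite, and the results.json file name is hypothetical (the harness consumes bdevperf's output stream directly):

  # Illustrative only: extract headline numbers from a saved bdevperf JSON
  # summary; "results.json" is a hypothetical name for the block above.
  jq -r '.results[0] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us, max \(.max_latency_us) us"' results.json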
13:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
13:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
13:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
13:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 ))
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94703
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94703 ']'
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94703
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94703
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
killing process with pid 94703
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94703'
Received shutdown signal, test time was about 2.000000 seconds

Latency(us)
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
===================================================================================================================
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94703
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94703
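The pass/fail signal for the case above is the get_transient_errcount call traced at host/digest.sh@71: it reads the bdev's iostat (populated because bdev_nvme_set_options was run with --nvme-error-stat) and extracts the transient-transport-error counter with jq. Condensed into a standalone helper using the same rpc.py and jq invocations as the trace, with the socket path from this run:

  # Condensed restatement of the helper traced above (host/digest.sh@27-28);
  # paths and socket are taken from this run, not a general-purpose tool.
  get_transient_errcount() {
      local bdev=$1
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }
  # In this run the counter came back as 164, so the (( 164 > 0 )) check passed.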
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94780
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94780 /var/tmp/bperf.sock
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94780 ']'
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
[2024-12-06 13:20:34.376866] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization...
[2024-12-06 13:20:34.376968] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94780 ]
[2024-12-06 13:20:34.518490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-06 13:20:34.552066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
13:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
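Note the ordering in the setup just traced: crc32c error injection is switched off before the controller is attached with --ddgst, so the attach itself completes cleanly, and corruption is only enabled (next trace block) once nvme0 exists. The toggle, with the flags exactly as they appear in this log (the -i 32 argument is reproduced verbatim from the trace, not taken from RPC documentation):

  # Error-injection toggle as traced in this log; flag values are copied
  # from the trace above.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $RPC accel_error_inject_error -o crc32c -t disable        # setup phase: no corruption
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32  # test phase: corrupt crc32c results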
nvme0n1
13:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
13:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
13:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
13:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
13:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
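Everything from here to the end of the capture is the intended failure mode: with crc32c corruption active and data digest (--ddgst) enabled on the TCP controller, each write's digest check fails in tcp.c:data_crc32_calc_done and the command completes with TRANSIENT TRANSPORT ERROR (00/22). The whole case reduces to the following skeleton, assembled from the commands traced above (socket, address, and NQN values are the ones from this run):

  # Skeleton of this digest-error case, assembled from the traced commands.
  SOCK=/var/tmp/bperf.sock
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $SOCK"
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
  # afterwards: assert that get_transient_errcount nvme0n1 is > 0, as in the previous case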
[2024-12-06 13:20:35.457118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8
[2024-12-06 13:20:35.457248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-06 13:20:35.457278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... repeated "Data digest error" / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triples on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8, one per 32-block write from 13:20:35.462 through 13:20:35.724, omitted ...]
[2024-12-06 13:20:35.728715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8
[2024-12-06 13:20:35.728904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0
nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.728959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.733437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.733644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.733686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.738144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.738368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.738409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.742833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.743060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.743099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.747639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.747860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.747900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.752375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.752588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.752629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.757108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.757297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.757338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.761857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.762069] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.762108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.766553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.766743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.766782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.771274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.771477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.771518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.776120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.776332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.776372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.780835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.781053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.781092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.785609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.785798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.785838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.790296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.790494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.790535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.795060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.795260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.795299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.799851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.800074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.800111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.804583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.804770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.804810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.809283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.809500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.809541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.814102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.814302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.814340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.818858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.819101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.819169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.302 [2024-12-06 13:20:35.823685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.302 [2024-12-06 13:20:35.823856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.302 [2024-12-06 13:20:35.823892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.561 [2024-12-06 13:20:35.828530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.561 [2024-12-06 
13:20:35.828722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-12-06 13:20:35.828772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.561 [2024-12-06 13:20:35.833329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.561 [2024-12-06 13:20:35.833497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-12-06 13:20:35.833533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.561 [2024-12-06 13:20:35.838159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.561 [2024-12-06 13:20:35.838325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-12-06 13:20:35.838359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.561 [2024-12-06 13:20:35.843012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.561 [2024-12-06 13:20:35.843216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-12-06 13:20:35.843250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.561 [2024-12-06 13:20:35.847801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.561 [2024-12-06 13:20:35.848019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-12-06 13:20:35.848054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.561 [2024-12-06 13:20:35.852647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.561 [2024-12-06 13:20:35.852860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-12-06 13:20:35.852892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.561 [2024-12-06 13:20:35.857486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.561 [2024-12-06 13:20:35.857657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-12-06 13:20:35.857691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.561 [2024-12-06 13:20:35.862262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with 
pdu=0x200016eff3c8 00:26:29.561 [2024-12-06 13:20:35.862455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-12-06 13:20:35.862488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.561 [2024-12-06 13:20:35.867087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.561 [2024-12-06 13:20:35.867301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-12-06 13:20:35.867334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.561 [2024-12-06 13:20:35.871936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.561 [2024-12-06 13:20:35.872125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-12-06 13:20:35.872157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.561 [2024-12-06 13:20:35.876690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.561 [2024-12-06 13:20:35.876856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-12-06 13:20:35.876888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.561 [2024-12-06 13:20:35.881490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.561 [2024-12-06 13:20:35.881657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-12-06 13:20:35.881689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.561 [2024-12-06 13:20:35.886238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.561 [2024-12-06 13:20:35.886416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-12-06 13:20:35.886447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.561 [2024-12-06 13:20:35.890986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.561 [2024-12-06 13:20:35.891216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-12-06 13:20:35.891258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.561 [2024-12-06 13:20:35.895662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.561 [2024-12-06 13:20:35.895865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-12-06 13:20:35.895911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.900342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.900540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.900585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.905193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.905343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.905377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.909920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.910091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.910114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.914633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.914794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.914827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.919398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.919566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.919599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.924195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.924368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.924392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.929019] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.929196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.929229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.934175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.934349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.934372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.939185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.939355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.939389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.944703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.944849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.944872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.949309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.949449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.949472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.953881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.954046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.954069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.958519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.958659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.958681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.562 
[2024-12-06 13:20:35.963375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.963561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.963583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.968736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.968907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.968929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.974019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.974195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.974217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.978669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.978835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.978864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.983255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.983420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.983443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.987838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.988010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.988032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.992394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.992556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.992584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:35.996975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:35.997137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:35.997176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:36.001545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:36.001705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:36.001729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:36.006187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:36.006338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:36.006363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:36.010721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:36.010860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:36.010884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:36.015313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:36.015465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:36.015489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:36.019901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:36.020059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:36.020083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:36.024418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.562 [2024-12-06 13:20:36.024580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-12-06 13:20:36.024604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.562 [2024-12-06 13:20:36.029026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.563 [2024-12-06 13:20:36.029166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.563 [2024-12-06 13:20:36.029190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.563 [2024-12-06 13:20:36.033549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.563 [2024-12-06 13:20:36.033702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.563 [2024-12-06 13:20:36.033726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.563 [2024-12-06 13:20:36.038113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.563 [2024-12-06 13:20:36.038261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.563 [2024-12-06 13:20:36.038286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.563 [2024-12-06 13:20:36.042647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.563 [2024-12-06 13:20:36.042800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.563 [2024-12-06 13:20:36.042826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.563 [2024-12-06 13:20:36.047226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.563 [2024-12-06 13:20:36.047375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.563 [2024-12-06 13:20:36.047399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.563 [2024-12-06 13:20:36.051900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.563 [2024-12-06 13:20:36.052079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.563 [2024-12-06 13:20:36.052103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.563 [2024-12-06 13:20:36.056893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.563 [2024-12-06 13:20:36.057096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.563 [2024-12-06 13:20:36.057121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.563 [2024-12-06 13:20:36.061880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.563 [2024-12-06 13:20:36.062035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.563 [2024-12-06 13:20:36.062059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.563 [2024-12-06 13:20:36.066463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.563 [2024-12-06 13:20:36.066606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.563 [2024-12-06 13:20:36.066629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.563 [2024-12-06 13:20:36.071081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.563 [2024-12-06 13:20:36.071255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.563 [2024-12-06 13:20:36.071279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.563 [2024-12-06 13:20:36.075638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.563 [2024-12-06 13:20:36.075803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.563 [2024-12-06 13:20:36.075826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.563 [2024-12-06 13:20:36.080232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.563 [2024-12-06 13:20:36.080376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.563 [2024-12-06 13:20:36.080399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.563 [2024-12-06 13:20:36.084966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.563 [2024-12-06 13:20:36.085122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.563 [2024-12-06 13:20:36.085145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.822 [2024-12-06 13:20:36.090188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.822 [2024-12-06 13:20:36.090371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.822 [2024-12-06 13:20:36.090394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.822 [2024-12-06 13:20:36.094958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.822 [2024-12-06 13:20:36.095121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.822 [2024-12-06 13:20:36.095145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.822 [2024-12-06 13:20:36.099509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.822 [2024-12-06 13:20:36.099652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.822 [2024-12-06 13:20:36.099693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.822 [2024-12-06 13:20:36.104029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.822 [2024-12-06 13:20:36.104170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.104199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.108557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.108702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.108726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.113127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.113294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.113318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.117750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.117915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.117960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.122472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.122644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 
13:20:36.122669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.127901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.128081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.128104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.132631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.132803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.132826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.137234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.137391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.137416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.141822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.142011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.142034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.146420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.146571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.146593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.151024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.151192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.151215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.155668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.155829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:29.823 [2024-12-06 13:20:36.155854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.160929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.161109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.161132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.165793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.165985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.166008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.170391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.170573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.170599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.175010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.175194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.175220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.179602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.179772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.179797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.184272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.184457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.184483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.189022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.189194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.189220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.194153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.194323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.194347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.199185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.199356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.199381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.203717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.203893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.203919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.208337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.208530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.208556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.212977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.213162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.213188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.217658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.217810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.217836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.823 [2024-12-06 13:20:36.222359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.823 [2024-12-06 13:20:36.222505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-12-06 13:20:36.222527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.227249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.227390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.227418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.232476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.232646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.232681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.237066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.237220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.237247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.241691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.241835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.241866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.246329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.246479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.246503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.250901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.251112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.251138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.255529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.255696] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.255720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.260263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.260409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.260434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.265698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.265855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.265879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.270362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.270524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.270549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.274985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.275142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.275178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.279617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.279776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.279799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.284214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.284376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.284410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.288867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.289028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.289052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.293554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.293693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.293717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.298969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.299109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.299133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.303509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.303650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.303679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.308073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.308228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.308251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.312612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.312771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.312795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.317222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.317364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.317387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.321888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 
13:20:36.322052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.322076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.326455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.326606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.326628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.331111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.331275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.331304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.335737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.335887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.335921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.340321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.340481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.340513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.824 [2024-12-06 13:20:36.345275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:29.824 [2024-12-06 13:20:36.345429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-12-06 13:20:36.345460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.084 [2024-12-06 13:20:36.350275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.084 [2024-12-06 13:20:36.350418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.084 [2024-12-06 13:20:36.350441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.084 [2024-12-06 13:20:36.354842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with 
pdu=0x200016eff3c8 00:26:30.084 [2024-12-06 13:20:36.354996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.084 [2024-12-06 13:20:36.355020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.084 [2024-12-06 13:20:36.359464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.084 [2024-12-06 13:20:36.359605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.084 [2024-12-06 13:20:36.359628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.084 [2024-12-06 13:20:36.364126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.084 [2024-12-06 13:20:36.364266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.084 [2024-12-06 13:20:36.364289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.084 [2024-12-06 13:20:36.368728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.084 [2024-12-06 13:20:36.368895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.084 [2024-12-06 13:20:36.368921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.084 [2024-12-06 13:20:36.373304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.084 [2024-12-06 13:20:36.373468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.084 [2024-12-06 13:20:36.373491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.084 [2024-12-06 13:20:36.377850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.084 [2024-12-06 13:20:36.378024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.378048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.382446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.382606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.382628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.387003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.387177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.387200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.391577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.391749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.391789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.396196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.396337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.396359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.400745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.400884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.400907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.405307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.405455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.405479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.409859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.410045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.410067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.414490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.414630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.414653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.419037] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.419209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.419232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.423572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.423742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.423763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.428178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.428319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.428341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.432734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.432893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.432916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.437383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.437523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.437545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.441924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.442079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.442101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.446549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.446711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.446733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.085 6378.00 IOPS, 797.25 MiB/s 
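For context on the repeated data_crc32_calc_done errors above: NVMe/TCP PDUs can carry a CRC32C data digest (DDGST) over the payload, and the receiver recomputes it on arrival. A mismatch is logged here as "Data digest error" on the qpair, and the affected WRITE then completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly the behavior this digest error-injection test is exercising. Below is a minimal, hedged sketch of the CRC32C computation involved; it is a generic bitwise reference implementation, not SPDK's accelerated code path, and the standalone crc32c() function and test harness are illustrative assumptions only.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Reflected CRC32C (Castagnoli polynomial, reflected form 0x82F63B78),
 * the checksum NVMe/TCP specifies for its header/data digests.
 * Bitwise reference version; production stacks typically use a
 * table-driven or SSE4.2 crc32 instruction variant instead. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int k = 0; k < 8; k++) {
			/* (0u - (crc & 1u)) is all-ones when the LSB is set,
			 * so the polynomial is XORed in only on odd crc. */
			crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
	/* "123456789" is the standard CRC check string;
	 * CRC32C of it must be 0xE3069283. */
	const uint8_t msg[] = "123456789";

	printf("crc32c = 0x%08X\n", crc32c(msg, sizeof(msg) - 1));
	return 0;
}

A receiver in this scheme compares the recomputed digest against the DDGST field trailing the PDU data; any corruption of the payload in flight (or, as in this test, deliberately injected corruption) makes the two differ, producing the per-command error/completion pairs that repeat through this portion of the log.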
[2024-12-06T13:20:36.613Z] [2024-12-06 13:20:36.452358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.452454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.452477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.457285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.457368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.457392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.462046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.462215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.462254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.466603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.466738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.466770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.471172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.471259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.471295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.476228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.476370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.476393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.481620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.481708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.481730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.486375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.486471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.486493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.490988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.491074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.491096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.495579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.495670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.495692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.085 [2024-12-06 13:20:36.500233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.085 [2024-12-06 13:20:36.500331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-12-06 13:20:36.500353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.505509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.505606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.505628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.510117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.510206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.510229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.514654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.514789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.514811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.519243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.519332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.519354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.523862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.524022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.524055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.528535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.528763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.528805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.533243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.533337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.533361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.538515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.538630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.538654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.543324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.543408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.543431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.547838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.547928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.547967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.552429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.552569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.552593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.556958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.557060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.557083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.561404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.561649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.561686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.565985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.566077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.566102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.570994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.571108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.571133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.576216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.576309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.576333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.580820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.580909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 
13:20:36.580931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.585387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.585477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.585499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.589989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.590078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.590101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.594560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.594657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.594679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.599465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.599558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.599581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.086 [2024-12-06 13:20:36.604277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.086 [2024-12-06 13:20:36.604359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-12-06 13:20:36.604395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.345 [2024-12-06 13:20:36.609693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.345 [2024-12-06 13:20:36.609787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.345 [2024-12-06 13:20:36.609810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.345 [2024-12-06 13:20:36.614282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.345 [2024-12-06 13:20:36.614372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:30.345 [2024-12-06 13:20:36.614394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.345 [2024-12-06 13:20:36.618809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.345 [2024-12-06 13:20:36.618889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.345 [2024-12-06 13:20:36.618911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.345 [2024-12-06 13:20:36.623392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.345 [2024-12-06 13:20:36.623486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.345 [2024-12-06 13:20:36.623509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.345 [2024-12-06 13:20:36.628010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.345 [2024-12-06 13:20:36.628101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.345 [2024-12-06 13:20:36.628123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.345 [2024-12-06 13:20:36.632596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.345 [2024-12-06 13:20:36.632685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.345 [2024-12-06 13:20:36.632707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.345 [2024-12-06 13:20:36.637351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.345 [2024-12-06 13:20:36.637460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.345 [2024-12-06 13:20:36.637486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.345 [2024-12-06 13:20:36.642694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.345 [2024-12-06 13:20:36.642809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.345 [2024-12-06 13:20:36.642835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.345 [2024-12-06 13:20:36.647535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.345 [2024-12-06 13:20:36.647629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.345 [2024-12-06 13:20:36.647655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.345 [2024-12-06 13:20:36.652157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.345 [2024-12-06 13:20:36.652256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.345 [2024-12-06 13:20:36.652282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.345 [2024-12-06 13:20:36.656791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.345 [2024-12-06 13:20:36.656891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.345 [2024-12-06 13:20:36.656916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.345 [2024-12-06 13:20:36.661469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.345 [2024-12-06 13:20:36.661609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.345 [2024-12-06 13:20:36.661634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.345 [2024-12-06 13:20:36.666138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.345 [2024-12-06 13:20:36.666238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.345 [2024-12-06 13:20:36.666263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.345 [2024-12-06 13:20:36.670819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.345 [2024-12-06 13:20:36.670927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.345 [2024-12-06 13:20:36.670966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.345 [2024-12-06 13:20:36.676094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.345 [2024-12-06 13:20:36.676243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.345 [2024-12-06 13:20:36.676269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.345 [2024-12-06 13:20:36.681104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.346 [2024-12-06 13:20:36.681217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.346 [2024-12-06 13:20:36.681242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.346 [2024-12-06 13:20:36.685708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.346 [2024-12-06 13:20:36.685813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.346 [2024-12-06 13:20:36.685839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.346 [2024-12-06 13:20:36.690379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.346 [2024-12-06 13:20:36.690484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.346 [2024-12-06 13:20:36.690509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.346 [2024-12-06 13:20:36.694936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.346 [2024-12-06 13:20:36.695039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.346 [2024-12-06 13:20:36.695064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.346 [2024-12-06 13:20:36.699541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.346 [2024-12-06 13:20:36.699630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.346 [2024-12-06 13:20:36.699654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.346 [2024-12-06 13:20:36.704234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.346 [2024-12-06 13:20:36.704350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.346 [2024-12-06 13:20:36.704376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.346 [2024-12-06 13:20:36.709241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.346 [2024-12-06 13:20:36.709348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.346 [2024-12-06 13:20:36.709374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.346 [2024-12-06 13:20:36.714443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.346 [2024-12-06 13:20:36.714552] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.346 [2024-12-06 13:20:36.714579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.346 [2024-12-06 13:20:36.719133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8
00:26:30.346 [2024-12-06 13:20:36.719244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.346 [2024-12-06 13:20:36.719271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... ~20 near-identical record groups (13:20:36.723 through 13:20:36.812) omitted: each is a tcp.c:2241:data_crc32_calc_done data digest *ERROR* followed by the offending WRITE (cid:0, varying lba) and a TRANSIENT TRANSPORT ERROR (00/22) completion, with sqhd cycling 0002/0022/0042/0062 ...]
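The repeated tcp.c:2241:data_crc32_calc_done errors above are the NVMe/TCP data digest (DDGST) check firing: when digests are negotiated for a connection, each data-bearing PDU carries a CRC32C over its payload, the receiver recomputes it, and a mismatch fails the command back to the host as a transport-level error rather than a media error. A minimal sketch of that check in plain C, assuming nothing of SPDK's internals (SPDK's own crc32c helpers are table- and instruction-accelerated):

/* Illustrative only: NVMe/TCP's DDGST is a CRC32C (Castagnoli
 * polynomial) over the PDU data payload. Bitwise reference version. */
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
                crc ^= buf[i];
                for (int k = 0; k < 8; k++) {
                        /* 0x82F63B78 is the reflected CRC32C polynomial. */
                        crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
                }
        }
        return crc ^ 0xFFFFFFFFu;
}

/* Receiver-side check: a mismatch is what the log reports as a
 * "Data digest error", and the command is failed back to the host. */
static bool ddgst_ok(const uint8_t *payload, size_t len, uint32_t recv_ddgst)
{
        return crc32c(payload, len) == recv_ddgst;
}

int main(void)
{
        const uint8_t payload[] = "spdk";
        uint32_t good = crc32c(payload, 4);

        /* Flip one bit of the received digest to mimic the corruption
         * this test injects: the check fails, as in the records above. */
        return ddgst_ok(payload, 4, good ^ 1u) ? 1 : 0;
}

Because the digest only fails after the payload has been received, the command itself is well formed, which is consistent with each error here being paired with a normally printed WRITE and a retryable completion rather than a connection teardown.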
00:26:30.347 [2024-12-06 13:20:36.817270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8
00:26:30.347 [2024-12-06 13:20:36.817366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.347 [2024-12-06 13:20:36.817388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... ~18 near-identical record groups (13:20:36.821 through 13:20:36.900) omitted, same three-line pattern with varying lba ...]
00:26:30.606 [2024-12-06 13:20:36.904674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8
00:26:30.606 [2024-12-06 13:20:36.904788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.606 [2024-12-06 13:20:36.904810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
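Every completion in this run carries the same status, printed as "(00/22)": Status Code Type 0x0 (generic command status) and Status Code 0x22, which spdk_nvme_print_completion spells out as TRANSIENT TRANSPORT ERROR; dnr:0 means the do-not-retry bit is clear, so the host may resubmit. A small sketch of how those fields unpack from the 16-bit status word of a completion entry (field layout per the NVMe base specification; the helper name is mine, not SPDK's):

/* Unpack the status field of an NVMe completion (CQE DW3 bits 31:16)
 * the way the log lines above render it. Per the NVMe base spec:
 * bit 0 = phase (P), bits 8:1 = status code (SC), bits 11:9 = status
 * code type (SCT), bit 14 = more (M), bit 15 = do not retry (DNR). */
#include <stdint.h>
#include <stdio.h>

static void print_cpl_status(uint16_t status)
{
        unsigned p   = status & 0x1u;
        unsigned sc  = (status >> 1) & 0xFFu;
        unsigned sct = (status >> 9) & 0x7u;
        unsigned m   = (status >> 14) & 0x1u;
        unsigned dnr = (status >> 15) & 0x1u;

        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
        /* SCT 0x0 (generic), SC 0x22 = TRANSIENT TRANSPORT ERROR,
         * as in every completion in this run. */
        uint16_t status = (uint16_t)((0x0u << 9) | (0x22u << 1));
        print_cpl_status(status);   /* -> "(00/22) p:0 m:0 dnr:0" */
        return 0;
}

The sqhd value in the same entries is the submission queue head the controller reports back to the host, which is why it advances steadily (0002, 0022, 0042, 0062, ...) across consecutive completions.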
00:26:30.606 [2024-12-06 13:20:36.909301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8
00:26:30.606 [2024-12-06 13:20:36.909381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.606 [2024-12-06 13:20:36.909403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... ~19 near-identical cid:0 record groups (13:20:36.913 through 13:20:36.996) omitted, same three-line pattern with varying lba ...]
00:26:30.607 [2024-12-06 13:20:37.001097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8
00:26:30.607 [2024-12-06 13:20:37.001430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.607 [2024-12-06 13:20:37.001473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.607 [2024-12-06 13:20:37.005517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8
00:26:30.607 [2024-12-06 13:20:37.005733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.607 [2024-12-06 13:20:37.005778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.607 [2024-12-06 13:20:37.010058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8
00:26:30.607 [2024-12-06 13:20:37.010165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.607 [2024-12-06 13:20:37.010190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... ~25 near-identical cid:2 record groups (13:20:37.014 through 13:20:37.124) omitted, same three-line pattern with varying lba ...]
00:26:30.608 [2024-12-06 13:20:37.129382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8
00:26:30.608 [2024-12-06 13:20:37.129461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.608 [2024-12-06 13:20:37.129484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
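One more field worth decoding: each WRITE above is printed with "SGL TRANSPORT DATA BLOCK TRANSPORT 0x0". Fabrics commands describe their data with a 16-byte SGL descriptor instead of PRPs, and NVMe/TCP uses the Transport SGL Data Block type, meaning the payload moves inside transport PDUs rather than via a directly addressed buffer; the trailing 0x0 is the descriptor's address field. A sketch of how the descriptor's identifier byte splits into type and subtype (type values per the NVMe base spec's SGL descriptor table; the names and helper are illustrative, not SPDK's):

/* The last byte of a 16-byte NVMe SGL descriptor identifies it:
 * high nibble = descriptor type, low nibble = subtype. Type 0x5 is
 * the Transport SGL Data Block descriptor, which is why the WRITE
 * lines above read "SGL TRANSPORT DATA BLOCK". */
#include <stdint.h>
#include <stdio.h>

#define SGL_TYPE_DATA_BLOCK            0x0
#define SGL_TYPE_KEYED_DATA_BLOCK      0x4
#define SGL_TYPE_TRANSPORT_DATA_BLOCK  0x5

static const char *sgl_type_name(uint8_t id_byte)
{
        switch (id_byte >> 4) {
        case SGL_TYPE_DATA_BLOCK:           return "DATA BLOCK";
        case SGL_TYPE_KEYED_DATA_BLOCK:     return "KEYED DATA BLOCK";
        case SGL_TYPE_TRANSPORT_DATA_BLOCK: return "TRANSPORT DATA BLOCK";
        default:                            return "UNKNOWN";
        }
}

int main(void)
{
        /* Identifier byte for a Transport SGL Data Block, subtype 0x0. */
        uint8_t id = (SGL_TYPE_TRANSPORT_DATA_BLOCK << 4) | 0x0;
        printf("SGL %s subtype 0x%x\n", sgl_type_name(id),
               (unsigned)(id & 0xF));
        return 0;
}

Compiled standalone, the snippet prints "SGL TRANSPORT DATA BLOCK subtype 0x0", matching the rendering in the command dumps above.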
00:26:30.867 [2024-12-06 13:20:37.134009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8
00:26:30.867 [2024-12-06 13:20:37.134095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.867 [2024-12-06 13:20:37.134116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... ~45 near-identical cid:2 record groups (13:20:37.138 through 13:20:37.347) omitted, same three-line pattern with varying lba ...]
00:26:30.868 [2024-12-06 13:20:37.351823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8
00:26:30.868 [2024-12-06 13:20:37.351913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.868 [2024-12-06 13:20:37.351935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.868 [2024-12-06 13:20:37.356429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error
on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.868 [2024-12-06 13:20:37.356530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.868 [2024-12-06 13:20:37.356554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.868 [2024-12-06 13:20:37.361060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.868 [2024-12-06 13:20:37.361166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.868 [2024-12-06 13:20:37.361189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.868 [2024-12-06 13:20:37.365618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.868 [2024-12-06 13:20:37.365707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.868 [2024-12-06 13:20:37.365731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.868 [2024-12-06 13:20:37.370239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.868 [2024-12-06 13:20:37.370332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.868 [2024-12-06 13:20:37.370354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.868 [2024-12-06 13:20:37.374826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.868 [2024-12-06 13:20:37.374982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.868 [2024-12-06 13:20:37.375005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.868 [2024-12-06 13:20:37.379466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.868 [2024-12-06 13:20:37.379546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.868 [2024-12-06 13:20:37.379569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.868 [2024-12-06 13:20:37.384148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.868 [2024-12-06 13:20:37.384232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.868 [2024-12-06 13:20:37.384255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.868 [2024-12-06 13:20:37.388732] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:30.868 [2024-12-06 13:20:37.389102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.868 [2024-12-06 13:20:37.389152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.126 [2024-12-06 13:20:37.393244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:31.126 [2024-12-06 13:20:37.393435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.126 [2024-12-06 13:20:37.393472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.126 [2024-12-06 13:20:37.397803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:31.126 [2024-12-06 13:20:37.397987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.126 [2024-12-06 13:20:37.398028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.126 [2024-12-06 13:20:37.403005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:31.126 [2024-12-06 13:20:37.403212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.126 [2024-12-06 13:20:37.403244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.126 [2024-12-06 13:20:37.407577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:31.126 [2024-12-06 13:20:37.407752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.126 [2024-12-06 13:20:37.407775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.126 [2024-12-06 13:20:37.412219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:31.126 [2024-12-06 13:20:37.412374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.126 [2024-12-06 13:20:37.412398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.126 [2024-12-06 13:20:37.416835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:31.126 [2024-12-06 13:20:37.417007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.126 [2024-12-06 13:20:37.417032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.126 [2024-12-06 13:20:37.421473] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:31.126 [2024-12-06 13:20:37.421628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.126 [2024-12-06 13:20:37.421666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.126 [2024-12-06 13:20:37.426115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:31.126 [2024-12-06 13:20:37.426274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.126 [2024-12-06 13:20:37.426300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.126 [2024-12-06 13:20:37.430694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:31.126 [2024-12-06 13:20:37.430849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.126 [2024-12-06 13:20:37.430873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.126 [2024-12-06 13:20:37.435309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:31.126 [2024-12-06 13:20:37.435489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.127 [2024-12-06 13:20:37.435513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.127 [2024-12-06 13:20:37.439917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:31.127 [2024-12-06 13:20:37.440107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.127 [2024-12-06 13:20:37.440131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.127 [2024-12-06 13:20:37.444535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:31.127 [2024-12-06 13:20:37.444716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.127 [2024-12-06 13:20:37.444739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.127 [2024-12-06 13:20:37.449132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16fe1a0) with pdu=0x200016eff3c8 00:26:31.127 [2024-12-06 13:20:37.449289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.127 [2024-12-06 13:20:37.449312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.127 
6512.00 IOPS, 814.00 MiB/s
00:26:31.127 Latency(us)
00:26:31.127 [2024-12-06T13:20:37.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:31.127 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:31.127 nvme0n1 : 2.00 6510.35 813.79 0.00 0.00 2452.05 1906.50 10545.34
00:26:31.127 [2024-12-06T13:20:37.655Z] ===================================================================================================================
00:26:31.127 [2024-12-06T13:20:37.655Z] Total : 6510.35 813.79 0.00 0.00 2452.05 1906.50 10545.34
00:26:31.127 {
00:26:31.127   "results": [
00:26:31.127     {
00:26:31.127       "job": "nvme0n1",
00:26:31.127       "core_mask": "0x2",
00:26:31.127       "workload": "randwrite",
00:26:31.127       "status": "finished",
00:26:31.127       "queue_depth": 16,
00:26:31.127       "io_size": 131072,
00:26:31.127       "runtime": 2.002965,
00:26:31.127       "iops": 6510.348408484422,
00:26:31.127       "mibps": 813.7935510605528,
00:26:31.127       "io_failed": 0,
00:26:31.127       "io_timeout": 0,
00:26:31.127       "avg_latency_us": 2452.046527607362,
00:26:31.127       "min_latency_us": 1906.5018181818182,
00:26:31.127       "max_latency_us": 10545.338181818182
00:26:31.127     }
00:26:31.127   ],
00:26:31.127   "core_count": 1
00:26:31.127 }
00:26:31.127 13:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:31.127 13:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:31.127 | .driver_specific
00:26:31.127 | .nvme_error
00:26:31.127 | .status_code
00:26:31.127 | .command_transient_transport_error'
00:26:31.127 13:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:31.127 13:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:31.385 13:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 421 > 0 ))
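The trace above is the test's pass/fail check: get_transient_errcount issues bdev_get_iostat against the bdevperf RPC socket and extracts the NVMe driver's transient-transport-error counter from the reply with jq; here the counter read 421, so (( 421 > 0 )) succeeds. The same query can be reproduced by hand while bdevperf is running — a condensed sketch of the exact command/filter pair shown in the trace, with the multi-line jq filter collapsed to an equivalent one-liner:

    # ask bdevperf (RPC socket /var/tmp/bperf.sock) for iostat on nvme0n1
    # and pull out the transient transport error counter
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 )) && echo "digest errors surfaced as transient transport errors: $errcount"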
00:26:31.385 13:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94780
00:26:31.385 13:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94780 ']'
00:26:31.385 13:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94780
00:26:31.385 13:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:31.385 13:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:31.385 13:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94780
00:26:31.385 killing process with pid 94780
Received shutdown signal, test time was about 2.000000 seconds
00
00:26:31.385 Latency(us)
[2024-12-06T13:20:37.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-06T13:20:37.913Z] ===================================================================================================================
[2024-12-06T13:20:37.913Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
13:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:31.385 13:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94780'
13:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94780
13:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94780
00:26:31.642 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 94530
13:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94530 ']'
13:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94530
13:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
13:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
13:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94530
00:26:31.642 killing process with pid 94530
13:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
13:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
13:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94530'
13:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94530
13:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94530
00:26:31.901
00:26:31.901 real 0m15.381s
00:26:31.901 user 0m30.260s
00:26:31.901 sys 0m4.188s
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:31.901 ************************************
00:26:31.901 END TEST nvmf_digest_error
00:26:31.901 ************************************
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:31.901 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 94530 ']'
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 94530
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 94530 ']'
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 94530
00:26:31.901 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (94530) - No such process
00:26:31.901 Process with pid 94530 is not found
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 94530 is not found'
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:26:31.901 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0
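nvmftestfini above unwinds everything nvmftestinit set up at the start of the test: nvmfcleanup unloads the kernel initiator modules, iptr strips only the SPDK-tagged firewall rules (every rule was installed with an 'SPDK_NVMF:' comment, so filtering the saved ruleset removes them without disturbing anything else), nvmf_veth_fini deletes the veth/bridge topology, and remove_spdk_ns drops the target's network namespace. The sequence condensed into a standalone sketch, assuming the default interface and namespace names from nvmf/common.sh:

    modprobe -r nvme-tcp nvme-fabrics                      # unload kernel initiator modules
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules
    ip link delete nvmf_br type bridge                     # bridge joining all veth peer ends
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns delete nvmf_tgt_ns_spdk                       # removes the in-namespace veth ends too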
00:26:32.159
00:26:32.159 real 0m32.501s
00:26:32.159 user 1m2.362s
00:26:32.159 sys 0m8.657s
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:26:32.159 ************************************
00:26:32.159 END TEST nvmf_digest
00:26:32.159 ************************************
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]]
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]]
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.159 ************************************
00:26:32.159 START TEST nvmf_mdns_discovery
00:26:32.159 ************************************
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp
00:26:32.159 * Looking for test storage...
00:26:32.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:26:32.159 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in
00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1
00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v <
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:32.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.419 --rc genhtml_branch_coverage=1 00:26:32.419 --rc genhtml_function_coverage=1 00:26:32.419 --rc genhtml_legend=1 00:26:32.419 --rc geninfo_all_blocks=1 00:26:32.419 --rc geninfo_unexecuted_blocks=1 00:26:32.419 00:26:32.419 ' 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:32.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.419 --rc genhtml_branch_coverage=1 00:26:32.419 --rc genhtml_function_coverage=1 00:26:32.419 --rc genhtml_legend=1 00:26:32.419 --rc geninfo_all_blocks=1 00:26:32.419 --rc geninfo_unexecuted_blocks=1 00:26:32.419 00:26:32.419 ' 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:32.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.419 --rc genhtml_branch_coverage=1 00:26:32.419 --rc genhtml_function_coverage=1 00:26:32.419 --rc genhtml_legend=1 00:26:32.419 --rc geninfo_all_blocks=1 00:26:32.419 --rc geninfo_unexecuted_blocks=1 00:26:32.419 00:26:32.419 ' 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:32.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.419 --rc genhtml_branch_coverage=1 00:26:32.419 --rc genhtml_function_coverage=1 00:26:32.419 --rc genhtml_legend=1 00:26:32.419 --rc geninfo_all_blocks=1 00:26:32.419 --rc geninfo_unexecuted_blocks=1 00:26:32.419 00:26:32.419 ' 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:32.419 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:32.420 13:20:38 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:32.420 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:32.420 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:32.421 Cannot find device "nvmf_init_br" 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:32.421 Cannot find device "nvmf_init_br2" 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:32.421 Cannot find device "nvmf_tgt_br" 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:32.421 Cannot find device "nvmf_tgt_br2" 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:32.421 Cannot find device "nvmf_init_br" 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:32.421 Cannot find device "nvmf_init_br2" 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:32.421 Cannot find device "nvmf_tgt_br" 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:32.421 Cannot find device "nvmf_tgt_br2" 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:32.421 Cannot find device "nvmf_br" 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:32.421 Cannot find device "nvmf_init_if" 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 00:26:32.421 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:32.680 Cannot find device "nvmf_init_if2" 00:26:32.680 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 00:26:32.680 13:20:38 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:32.680 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:32.680 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:26:32.680 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:32.680 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:32.680 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:26:32.680 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:32.680 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:32.680 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:32.680 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:32.680 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:32.680 13:20:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
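The block above rebuilds the virtual test network for the next test: any leftover interfaces are torn down first (the "Cannot find device" messages are the expected result of deleting interfaces that are already gone), then a fresh namespace is created for the target, four veth pairs are added (two initiator interfaces that stay on the host, two target interfaces moved into the namespace), addresses 10.0.0.1/.2 (initiator) and 10.0.0.3/.4 (target) are assigned, and everything is brought up along with the nvmf_br bridge. The essential topology, reduced to a single initiator/target pair — a sketch following the interface names in the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end goes into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # enslave both peer ends to the bridge
    ip link set nvmf_tgt_br master nvmf_br

The pings that follow (10.0.0.3/.4 from the host, then 10.0.0.1/.2 from inside the namespace) confirm the bridge forwards in both directions before any NVMe/TCP traffic is attempted.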
00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:32.680 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:32.680 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:26:32.680 00:26:32.680 --- 10.0.0.3 ping statistics --- 00:26:32.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.680 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:32.680 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:32.680 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:26:32.680 00:26:32.680 --- 10.0.0.4 ping statistics --- 00:26:32.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.680 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:32.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:32.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:26:32.680 00:26:32.680 --- 10.0.0.1 ping statistics --- 00:26:32.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.680 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:32.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:32.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:26:32.680 00:26:32.680 --- 10.0.0.2 ping statistics --- 00:26:32.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.680 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:32.680 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:32.939 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:26:32.939 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:32.939 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:32.939 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.939 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=95116 00:26:32.939 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:26:32.939 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 95116 00:26:32.939 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 95116 ']' 00:26:32.939 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.939 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.939 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.939 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.939 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.939 [2024-12-06 13:20:39.282747] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
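With connectivity verified, nvmf_tgt is launched inside the namespace with --wait-for-rpc, so it idles until told to finish initializing, and waitforlisten polls until /var/tmp/spdk.sock accepts connections. The trace that follows then configures the target for the discovery test in four RPCs; a condensed sketch of that sequence, invoking scripts/rpc.py directly instead of the rpc_cmd wrapper used by the test:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_set_config --discovery-filter=address    # startup-time option: must precede framework init
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.3 -s 8009                     # discovery service on the conventional port 8009

The address filter restricts discovery-log entries to those matching the address on which the discovery host connected.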
00:26:32.939 [2024-12-06 13:20:39.283465] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:32.939 [2024-12-06 13:20:39.462556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:33.197 [2024-12-06 13:20:39.508468] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:33.197 [2024-12-06 13:20:39.508537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:33.197 [2024-12-06 13:20:39.508554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:33.197 [2024-12-06 13:20:39.508567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:33.197 [2024-12-06 13:20:39.508578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:33.197 [2024-12-06 13:20:39.509264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:33.197 [2024-12-06 13:20:39.671102] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:33.197 [2024-12-06 13:20:39.679271] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 ***
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:33.197 null0
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:33.197 null1
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:33.197 null2
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:33.197 null3
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.197 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:33.455 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.455 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=95151
00:26:33.455 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:26:33.455 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 95151 /tmp/host.sock
00:26:33.455 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 95151 ']'
00:26:33.455 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:26:33.455 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:33.455 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:26:33.455 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:26:33.455 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:33.455 13:20:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:33.455 [2024-12-06 13:20:39.784732] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization...
00:26:33.455 [2024-12-06 13:20:39.784830] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95151 ]
00:26:33.455 [2024-12-06 13:20:39.934777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:33.455 [2024-12-06 13:20:39.975138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:33.712 13:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:33.712 13:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0
00:26:33.712 13:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM
00:26:33.712 13:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT
00:26:33.712 13:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill
00:26:33.969 13:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=95169
00:26:33.969 13:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1
00:26:33.970 13:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63
00:26:33.970 13:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no'
00:26:33.970 Process 1065 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid)
00:26:33.970 Found user 'avahi' (UID 70) and group 'avahi' (GID 70).
00:26:33.970 Successfully dropped root privileges.
00:26:33.970 avahi-daemon 0.8 starting up.
00:26:33.970 WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
00:26:33.970 Successfully called chroot().
00:26:33.970 Successfully dropped remaining capabilities.
00:26:34.904 No service file found in /etc/avahi/services.
00:26:34.904 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4.
00:26:34.904 New relevant interface nvmf_tgt_if2.IPv4 for mDNS.
00:26:34.904 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3.
00:26:34.904 New relevant interface nvmf_tgt_if.IPv4 for mDNS.
00:26:34.904 Network interface enumeration completed.
00:26:34.904 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*.
00:26:34.904 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4.
00:26:34.904 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*.
00:26:34.904 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4.
00:26:34.904 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 3452398953.
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]]
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]]
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:26:34.904 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]]
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]]
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]]
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:35.162 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:26:35.162 [2024-12-06 13:20:41.577546] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]]
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:35.163 [2024-12-06 13:20:41.639754] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.163 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:35.421 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.421 13:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5
00:26:35.990 [2024-12-06 13:20:42.477547] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW
00:26:36.558 [2024-12-06 13:20:42.877573] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:26:36.558 [2024-12-06 13:20:42.877622] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:26:36.558 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:26:36.558 cookie is 0
00:26:36.558 is_local: 1
00:26:36.558 our_own: 0
00:26:36.558 wide_area: 0
00:26:36.558 multicast: 1
00:26:36.558 cached: 1
00:26:36.558 [2024-12-06 13:20:42.977555] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:26:36.558 [2024-12-06 13:20:42.977604] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:26:36.558 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:26:36.558 cookie is 0
00:26:36.558 is_local: 1
00:26:36.558 our_own: 0
00:26:36.558 wide_area: 0
00:26:36.558 multicast: 1
00:26:36.558 cached: 1
00:26:37.496 [2024-12-06 13:20:43.878494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.496 [2024-12-06 13:20:43.878569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe48850 with addr=10.0.0.4, port=8009
00:26:37.496 [2024-12-06 13:20:43.878602] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:26:37.496 [2024-12-06 13:20:43.878621] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:26:37.496 [2024-12-06 13:20:43.878632] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect
00:26:37.496 [2024-12-06 13:20:43.986269] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached
00:26:37.496 [2024-12-06 13:20:43.986318] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected
00:26:37.496 [2024-12-06 13:20:43.986341] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:26:37.755 [2024-12-06 13:20:44.072413] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0
00:26:37.755 [2024-12-06 13:20:44.126923] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420
00:26:37.755 [2024-12-06 13:20:44.127802] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xe7da00:1 started.
00:26:37.755 [2024-12-06 13:20:44.129548] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done
00:26:37.755 [2024-12-06 13:20:44.129577] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:26:37.755 [2024-12-06 13:20:44.134726] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xe7da00 was disconnected and freed. delete nvme_qpair.
00:26:38.691 [2024-12-06 13:20:44.878373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.691 [2024-12-06 13:20:44.878431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe7d800 with addr=10.0.0.4, port=8009
00:26:38.691 [2024-12-06 13:20:44.878454] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:26:38.691 [2024-12-06 13:20:44.878464] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:26:38.691 [2024-12-06 13:20:44.878473] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect
00:26:39.622 [2024-12-06 13:20:45.878394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.622 [2024-12-06 13:20:45.878472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe66560 with addr=10.0.0.4, port=8009
00:26:39.622 [2024-12-06 13:20:45.878494] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:26:39.622 [2024-12-06 13:20:45.878505] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:26:39.622 [2024-12-06 13:20:45.878515] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect
00:26:40.200 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found'
00:26:40.200 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:26:40.200 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4
00:26:40.200 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:26:40.200 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found'
00:26:40.200 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:26:40.200 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:26:40.200 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:26:40.200 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:26:40.200 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:26:40.200 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:26:40.200 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:26:40.200 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:40.200 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:26:40.481 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:40.481 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:26:40.481 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:40.481 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:26:40.481 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:40.481 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:26:40.481 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]]
00:26:40.481 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0
00:26:40.481 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009
00:26:40.481 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.481 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:40.481 [2024-12-06 13:20:46.723328] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 ***
00:26:40.481 [2024-12-06 13:20:46.725582] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:26:40.481 [2024-12-06 13:20:46.725625] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:26:40.481 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.481 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420
00:26:40.481 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.481 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:40.481 [2024-12-06 13:20:46.731242] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 ***
00:26:40.481 [2024-12-06 13:20:46.731598] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:26:40.481 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.481 13:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1
00:26:40.481 [2024-12-06 13:20:46.861724] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:26:40.481 [2024-12-06 13:20:46.861812] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:26:40.481 [2024-12-06 13:20:46.887979] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached
00:26:40.481 [2024-12-06 13:20:46.888025] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected
00:26:40.481 [2024-12-06 13:20:46.888046] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:26:40.481 [2024-12-06 13:20:46.948447] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:26:40.481 [2024-12-06 13:20:46.976209] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0
00:26:40.742 [2024-12-06 13:20:47.038292] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420
00:26:40.742 [2024-12-06 13:20:47.040304] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0xe7acc0:1 started.
00:26:40.742 [2024-12-06 13:20:47.042980] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done
00:26:40.742 [2024-12-06 13:20:47.043150] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again
00:26:40.742 [2024-12-06 13:20:47.048254] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0xe7acc0 was disconnected and freed. delete nvme_qpair.
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:26:41.310 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:26:41.310 +;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:26:41.310 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:26:41.310 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:26:41.310 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:26:41.310 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:26:41.310 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]]
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]]
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]]
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]]
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]]
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name'
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]]
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name'
00:26:41.310 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]]
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]]
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:41.569 13:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.569 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]]
00:26:41.569 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0
00:26:41.569 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:26:41.569 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.569 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:41.569 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:26:41.569 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:26:41.569 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:26:41.569 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.569 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]]
00:26:41.569 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count
00:26:41.569 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:26:41.569 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:26:41.569 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.569 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:41.569 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.828 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2
00:26:41.828 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2
00:26:41.828 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]]
00:26:41.828 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:26:41.828 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.828 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:41.828 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.828 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3
00:26:41.828 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.829 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:41.829 [2024-12-06 13:20:48.149314] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xe7cef0:1 started.
00:26:41.829 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.829 13:20:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1
00:26:41.829 [2024-12-06 13:20:48.157033] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xe7cef0 was disconnected and freed. delete nvme_qpair.
00:26:41.829 [2024-12-06 13:20:48.163012] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0xe798d0:1 started.
00:26:41.829 [2024-12-06 13:20:48.166879] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0xe798d0 was disconnected and freed. delete nvme_qpair.
00:26:41.829 [2024-12-06 13:20:48.177562] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:26:41.829 [2024-12-06 13:20:48.177591] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:26:41.829 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:26:41.829 cookie is 0 00:26:41.829 is_local: 1 00:26:41.829 our_own: 0 00:26:41.829 wide_area: 0 00:26:41.829 multicast: 1 00:26:41.829 cached: 1 00:26:41.829 [2024-12-06 13:20:48.177608] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:26:41.829 [2024-12-06 13:20:48.277561] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:26:41.829 [2024-12-06 13:20:48.277616] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:26:41.829 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:26:41.829 cookie is 0 00:26:41.829 is_local: 1 00:26:41.829 our_own: 0 00:26:41.829 wide_area: 0 00:26:41.829 multicast: 1 00:26:41.829 cached: 1 00:26:41.829 [2024-12-06 13:20:48.277637] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:26:42.765 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:26:42.765 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.765 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:26:42.765 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.765 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:26:42.765 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.765 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:26:42.765 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.765 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:26:42.765 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:26:42.765 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:42.765 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:26:42.765 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.765 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.765 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.765 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:26:42.765 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:26:42.766 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 00:26:42.766 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:26:42.766 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.766 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.766 [2024-12-06 13:20:49.272865] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:42.766 [2024-12-06 13:20:49.273884] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:42.766 [2024-12-06 13:20:49.273924] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:42.766 [2024-12-06 13:20:49.273974] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:26:42.766 [2024-12-06 13:20:49.273990] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:26:42.766 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.766 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 00:26:42.766 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.766 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.766 [2024-12-06 13:20:49.280803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 00:26:42.766 [2024-12-06 13:20:49.281879] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:42.766 [2024-12-06 13:20:49.281967] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:26:42.766 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.766 13:20:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 00:26:43.024 [2024-12-06 13:20:49.415999] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0 00:26:43.024 [2024-12-06 13:20:49.416417] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 00:26:43.024 [2024-12-06 13:20:49.481740] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421 00:26:43.024 [2024-12-06 13:20:49.481820] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:26:43.024 
[2024-12-06 13:20:49.481834] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:26:43.024 [2024-12-06 13:20:49.481841] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:26:43.025 [2024-12-06 13:20:49.481861] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:26:43.025 [2024-12-06 13:20:49.482446] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:26:43.025 [2024-12-06 13:20:49.482494] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:26:43.025 [2024-12-06 13:20:49.482504] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:26:43.025 [2024-12-06 13:20:49.482510] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:26:43.025 [2024-12-06 13:20:49.482527] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:43.025 [2024-12-06 13:20:49.527107] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:26:43.025 [2024-12-06 13:20:49.527136] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:26:43.025 [2024-12-06 13:20:49.528092] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:26:43.025 [2024-12-06 13:20:49.528113] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:26:43.961 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.224 [2024-12-06 13:20:50.565864] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:44.224 [2024-12-06 13:20:50.565906] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:44.224 [2024-12-06 13:20:50.565960] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:26:44.224 [2024-12-06 13:20:50.565978] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:26:44.224 [2024-12-06 13:20:50.568122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.224 [2024-12-06 13:20:50.568160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.224 [2024-12-06 13:20:50.568173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.224 [2024-12-06 13:20:50.568183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.224 [2024-12-06 13:20:50.568193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.224 [2024-12-06 13:20:50.568202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.224 [2024-12-06 13:20:50.568212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.224 [2024-12-06 13:20:50.568222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.224 [2024-12-06 13:20:50.568231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2790 is same with the state(6) to be set 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.224 [2024-12-06 13:20:50.577858] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:44.224 [2024-12-06 13:20:50.577920] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:26:44.224 [2024-12-06 13:20:50.578078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2790 (9): Bad file descriptor 00:26:44.224 [2024-12-06 13:20:50.582064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.224 [2024-12-06 13:20:50.582094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.224 [2024-12-06 13:20:50.582106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.224 [2024-12-06 13:20:50.582116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.224 [2024-12-06 13:20:50.582126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.224 [2024-12-06 13:20:50.582136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.224 [2024-12-06 13:20:50.582146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.224 [2024-12-06 13:20:50.582155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.224 [2024-12-06 13:20:50.582164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe67f60 is same with the state(6) to be set 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.224 13:20:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 00:26:44.224 [2024-12-06 13:20:50.588096] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:44.224 [2024-12-06 13:20:50.588123] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:44.224 [2024-12-06 13:20:50.588130] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:44.224 [2024-12-06 13:20:50.588137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:44.224 [2024-12-06 13:20:50.588166] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
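
The connect() failed, errno = 111 (ECONNREFUSED) reconnect failures that follow are expected at this point in the test: the two nvmf_subsystem_remove_listener calls above have just torn down the 4420 listeners on nqn.2016-06.io.spdk:cnode0 and cnode20, so the host-side bdev_nvme layer keeps retrying the now-dead 10.0.0.3:4420 and 10.0.0.4:4420 paths until the discovery poller fetches a fresh log page and prunes them (visible further down at 13:20:50.710). A minimal standalone sketch of the same sequence, assuming a standard SPDK checkout where scripts/rpc.py is available; the RPC names, socket path, jq filter, and expected value are all taken from this trace, but this is a hand-written illustration, not the test's actual code:

    # Sketch (assumption: target app on the default RPC socket, host-side
    # bdev_nvme app on /tmp/host.sock, as in this run). Remove the 4420
    # listener, then poll until only the 4421 path remains on the
    # mdns-attached controller.
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420
    while true; do
        paths=$(./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
        [[ "$paths" == "4421" ]] && break
        sleep 1   # reconnect attempts fail with ECONNREFUSED until discovery prunes 4420
    done
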
00:26:44.224 [2024-12-06 13:20:50.588257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.224 [2024-12-06 13:20:50.588281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf2790 with addr=10.0.0.3, port=4420 00:26:44.224 [2024-12-06 13:20:50.588292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2790 is same with the state(6) to be set 00:26:44.224 [2024-12-06 13:20:50.588310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2790 (9): Bad file descriptor 00:26:44.224 [2024-12-06 13:20:50.588325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:44.224 [2024-12-06 13:20:50.588334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:44.224 [2024-12-06 13:20:50.588345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:44.224 [2024-12-06 13:20:50.588355] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:44.224 [2024-12-06 13:20:50.588361] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:44.224 [2024-12-06 13:20:50.588367] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:44.224 [2024-12-06 13:20:50.592030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe67f60 (9): Bad file descriptor 00:26:44.224 [2024-12-06 13:20:50.598176] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:44.224 [2024-12-06 13:20:50.598200] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:44.224 [2024-12-06 13:20:50.598207] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:44.224 [2024-12-06 13:20:50.598213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:44.224 [2024-12-06 13:20:50.598237] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:44.224 [2024-12-06 13:20:50.598293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.224 [2024-12-06 13:20:50.598314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf2790 with addr=10.0.0.3, port=4420 00:26:44.224 [2024-12-06 13:20:50.598325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2790 is same with the state(6) to be set 00:26:44.224 [2024-12-06 13:20:50.598341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2790 (9): Bad file descriptor 00:26:44.224 [2024-12-06 13:20:50.598355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:44.224 [2024-12-06 13:20:50.598364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:44.224 [2024-12-06 13:20:50.598374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:44.224 [2024-12-06 13:20:50.598382] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:44.224 [2024-12-06 13:20:50.598388] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:44.224 [2024-12-06 13:20:50.598393] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:44.224 [2024-12-06 13:20:50.602043] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:26:44.224 [2024-12-06 13:20:50.602067] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:26:44.224 [2024-12-06 13:20:50.602074] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:26:44.224 [2024-12-06 13:20:50.602079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:26:44.224 [2024-12-06 13:20:50.602104] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:26:44.224 [2024-12-06 13:20:50.602158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.224 [2024-12-06 13:20:50.602178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe67f60 with addr=10.0.0.4, port=4420 00:26:44.224 [2024-12-06 13:20:50.602189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe67f60 is same with the state(6) to be set 00:26:44.224 [2024-12-06 13:20:50.602205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe67f60 (9): Bad file descriptor 00:26:44.224 [2024-12-06 13:20:50.602219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:26:44.224 [2024-12-06 13:20:50.602228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:26:44.224 [2024-12-06 13:20:50.602238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:26:44.224 [2024-12-06 13:20:50.602247] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:26:44.225 [2024-12-06 13:20:50.602253] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:26:44.225 [2024-12-06 13:20:50.602258] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:26:44.225 [2024-12-06 13:20:50.608248] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:44.225 [2024-12-06 13:20:50.608273] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:44.225 [2024-12-06 13:20:50.608279] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:44.225 [2024-12-06 13:20:50.608285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:44.225 [2024-12-06 13:20:50.608310] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:44.225 [2024-12-06 13:20:50.608361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.225 [2024-12-06 13:20:50.608381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf2790 with addr=10.0.0.3, port=4420 00:26:44.225 [2024-12-06 13:20:50.608392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2790 is same with the state(6) to be set 00:26:44.225 [2024-12-06 13:20:50.608416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2790 (9): Bad file descriptor 00:26:44.225 [2024-12-06 13:20:50.608430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:44.225 [2024-12-06 13:20:50.608439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:44.225 [2024-12-06 13:20:50.608449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:44.225 [2024-12-06 13:20:50.608457] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:44.225 [2024-12-06 13:20:50.608463] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:44.225 [2024-12-06 13:20:50.608468] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:44.225 [2024-12-06 13:20:50.612113] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:26:44.225 [2024-12-06 13:20:50.612139] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:26:44.225 [2024-12-06 13:20:50.612146] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:26:44.225 [2024-12-06 13:20:50.612152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:26:44.225 [2024-12-06 13:20:50.612174] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:26:44.225 [2024-12-06 13:20:50.612226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.225 [2024-12-06 13:20:50.612246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe67f60 with addr=10.0.0.4, port=4420 00:26:44.225 [2024-12-06 13:20:50.612257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe67f60 is same with the state(6) to be set 00:26:44.225 [2024-12-06 13:20:50.612273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe67f60 (9): Bad file descriptor 00:26:44.225 [2024-12-06 13:20:50.612287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:26:44.225 [2024-12-06 13:20:50.612296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:26:44.225 [2024-12-06 13:20:50.612306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:26:44.225 [2024-12-06 13:20:50.612314] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:26:44.225 [2024-12-06 13:20:50.612320] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:26:44.225 [2024-12-06 13:20:50.612325] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:26:44.225 [2024-12-06 13:20:50.618320] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:44.225 [2024-12-06 13:20:50.618344] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:44.225 [2024-12-06 13:20:50.618350] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:44.225 [2024-12-06 13:20:50.618356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:44.225 [2024-12-06 13:20:50.618378] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:44.225 [2024-12-06 13:20:50.618428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.225 [2024-12-06 13:20:50.618449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf2790 with addr=10.0.0.3, port=4420 00:26:44.225 [2024-12-06 13:20:50.618459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2790 is same with the state(6) to be set 00:26:44.225 [2024-12-06 13:20:50.618474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2790 (9): Bad file descriptor 00:26:44.225 [2024-12-06 13:20:50.618489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:44.225 [2024-12-06 13:20:50.618498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:44.225 [2024-12-06 13:20:50.618507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:44.225 [2024-12-06 13:20:50.618515] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:44.225 [2024-12-06 13:20:50.618521] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:44.225 [2024-12-06 13:20:50.618526] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:44.225 [2024-12-06 13:20:50.622184] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:26:44.225 [2024-12-06 13:20:50.622208] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:26:44.225 [2024-12-06 13:20:50.622215] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:26:44.225 [2024-12-06 13:20:50.622220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:26:44.225 [2024-12-06 13:20:50.622241] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:26:44.225 [2024-12-06 13:20:50.622292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.225 [2024-12-06 13:20:50.622312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe67f60 with addr=10.0.0.4, port=4420 00:26:44.225 [2024-12-06 13:20:50.622322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe67f60 is same with the state(6) to be set 00:26:44.225 [2024-12-06 13:20:50.622338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe67f60 (9): Bad file descriptor 00:26:44.225 [2024-12-06 13:20:50.622352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:26:44.225 [2024-12-06 13:20:50.622362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:26:44.225 [2024-12-06 13:20:50.622371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:26:44.225 [2024-12-06 13:20:50.622379] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:26:44.225 [2024-12-06 13:20:50.622385] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:26:44.225 [2024-12-06 13:20:50.622390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:26:44.225 [2024-12-06 13:20:50.628390] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:44.225 [2024-12-06 13:20:50.628418] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:44.225 [2024-12-06 13:20:50.628425] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:44.225 [2024-12-06 13:20:50.628430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:44.225 [2024-12-06 13:20:50.628453] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:44.225 [2024-12-06 13:20:50.628509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.225 [2024-12-06 13:20:50.628529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf2790 with addr=10.0.0.3, port=4420 00:26:44.225 [2024-12-06 13:20:50.628540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2790 is same with the state(6) to be set 00:26:44.225 [2024-12-06 13:20:50.628556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2790 (9): Bad file descriptor 00:26:44.225 [2024-12-06 13:20:50.628570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:44.225 [2024-12-06 13:20:50.628579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:44.225 [2024-12-06 13:20:50.628589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:44.225 [2024-12-06 13:20:50.628597] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:44.225 [2024-12-06 13:20:50.628603] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:44.225 [2024-12-06 13:20:50.628608] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:44.225 [2024-12-06 13:20:50.632251] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:26:44.225 [2024-12-06 13:20:50.632276] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:26:44.225 [2024-12-06 13:20:50.632282] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:26:44.225 [2024-12-06 13:20:50.632288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:26:44.225 [2024-12-06 13:20:50.632310] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:26:44.225 [2024-12-06 13:20:50.632362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.225 [2024-12-06 13:20:50.632382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe67f60 with addr=10.0.0.4, port=4420 00:26:44.225 [2024-12-06 13:20:50.632393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe67f60 is same with the state(6) to be set 00:26:44.225 [2024-12-06 13:20:50.632408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe67f60 (9): Bad file descriptor 00:26:44.225 [2024-12-06 13:20:50.632423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:26:44.225 [2024-12-06 13:20:50.632432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:26:44.226 [2024-12-06 13:20:50.632442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:26:44.226 [2024-12-06 13:20:50.632450] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:26:44.226 [2024-12-06 13:20:50.632456] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:26:44.226 [2024-12-06 13:20:50.632461] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:26:44.226 [2024-12-06 13:20:50.638464] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:44.226 [2024-12-06 13:20:50.638489] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:44.226 [2024-12-06 13:20:50.638495] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:44.226 [2024-12-06 13:20:50.638500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:44.226 [2024-12-06 13:20:50.638522] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:44.226 [2024-12-06 13:20:50.638573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.226 [2024-12-06 13:20:50.638593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf2790 with addr=10.0.0.3, port=4420 00:26:44.226 [2024-12-06 13:20:50.638603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2790 is same with the state(6) to be set 00:26:44.226 [2024-12-06 13:20:50.638619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2790 (9): Bad file descriptor 00:26:44.226 [2024-12-06 13:20:50.638633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:44.226 [2024-12-06 13:20:50.638642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:44.226 [2024-12-06 13:20:50.638651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:44.226 [2024-12-06 13:20:50.638660] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:44.226 [2024-12-06 13:20:50.638666] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:44.226 [2024-12-06 13:20:50.638671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:44.226 [2024-12-06 13:20:50.642321] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:26:44.226 [2024-12-06 13:20:50.642348] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:26:44.226 [2024-12-06 13:20:50.642355] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:26:44.226 [2024-12-06 13:20:50.642360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:26:44.226 [2024-12-06 13:20:50.642387] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:26:44.226 [2024-12-06 13:20:50.642441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.226 [2024-12-06 13:20:50.642460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe67f60 with addr=10.0.0.4, port=4420 00:26:44.226 [2024-12-06 13:20:50.642471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe67f60 is same with the state(6) to be set 00:26:44.226 [2024-12-06 13:20:50.642487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe67f60 (9): Bad file descriptor 00:26:44.226 [2024-12-06 13:20:50.642501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:26:44.226 [2024-12-06 13:20:50.642510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:26:44.226 [2024-12-06 13:20:50.642520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:26:44.226 [2024-12-06 13:20:50.642528] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:26:44.226 [2024-12-06 13:20:50.642534] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:26:44.226 [2024-12-06 13:20:50.642539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:26:44.226 [2024-12-06 13:20:50.648538] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:44.226 [2024-12-06 13:20:50.648580] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:44.226 [2024-12-06 13:20:50.648587] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:44.226 [2024-12-06 13:20:50.648593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:44.226 [2024-12-06 13:20:50.648625] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:44.226 [2024-12-06 13:20:50.648710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.226 [2024-12-06 13:20:50.648733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf2790 with addr=10.0.0.3, port=4420 00:26:44.226 [2024-12-06 13:20:50.648745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2790 is same with the state(6) to be set 00:26:44.226 [2024-12-06 13:20:50.648763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2790 (9): Bad file descriptor 00:26:44.226 [2024-12-06 13:20:50.648779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:44.226 [2024-12-06 13:20:50.648788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:44.226 [2024-12-06 13:20:50.648799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:44.226 [2024-12-06 13:20:50.648808] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:44.226 [2024-12-06 13:20:50.648814] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:44.226 [2024-12-06 13:20:50.648820] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:44.226 [2024-12-06 13:20:50.652397] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:26:44.226 [2024-12-06 13:20:50.652424] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:26:44.226 [2024-12-06 13:20:50.652431] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:26:44.226 [2024-12-06 13:20:50.652437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:26:44.226 [2024-12-06 13:20:50.652463] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:26:44.226 [2024-12-06 13:20:50.652520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.226 [2024-12-06 13:20:50.652540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe67f60 with addr=10.0.0.4, port=4420 00:26:44.226 [2024-12-06 13:20:50.652550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe67f60 is same with the state(6) to be set 00:26:44.226 [2024-12-06 13:20:50.652566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe67f60 (9): Bad file descriptor 00:26:44.226 [2024-12-06 13:20:50.652592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:26:44.226 [2024-12-06 13:20:50.652603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:26:44.226 [2024-12-06 13:20:50.652612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:26:44.226 [2024-12-06 13:20:50.652621] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:26:44.226 [2024-12-06 13:20:50.652626] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:26:44.226 [2024-12-06 13:20:50.652631] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:26:44.226 [2024-12-06 13:20:50.658635] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:44.226 [2024-12-06 13:20:50.658661] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:44.226 [2024-12-06 13:20:50.658668] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:44.226 [2024-12-06 13:20:50.658673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:44.226 [2024-12-06 13:20:50.658697] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:44.226 [2024-12-06 13:20:50.658749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.226 [2024-12-06 13:20:50.658769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf2790 with addr=10.0.0.3, port=4420 00:26:44.226 [2024-12-06 13:20:50.658780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2790 is same with the state(6) to be set 00:26:44.226 [2024-12-06 13:20:50.658796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2790 (9): Bad file descriptor 00:26:44.226 [2024-12-06 13:20:50.658810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:44.226 [2024-12-06 13:20:50.658820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:44.226 [2024-12-06 13:20:50.658829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:44.226 [2024-12-06 13:20:50.658838] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:44.226 [2024-12-06 13:20:50.658844] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:44.226 [2024-12-06 13:20:50.658849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:44.226 [2024-12-06 13:20:50.662473] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:26:44.226 [2024-12-06 13:20:50.662498] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:26:44.226 [2024-12-06 13:20:50.662505] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:26:44.226 [2024-12-06 13:20:50.662510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:26:44.226 [2024-12-06 13:20:50.662532] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:26:44.226 [2024-12-06 13:20:50.662583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.226 [2024-12-06 13:20:50.662603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe67f60 with addr=10.0.0.4, port=4420 00:26:44.226 [2024-12-06 13:20:50.662613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe67f60 is same with the state(6) to be set 00:26:44.226 [2024-12-06 13:20:50.662629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe67f60 (9): Bad file descriptor 00:26:44.226 [2024-12-06 13:20:50.662654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:26:44.227 [2024-12-06 13:20:50.662664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:26:44.227 [2024-12-06 13:20:50.662674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:26:44.227 [2024-12-06 13:20:50.662682] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:26:44.227 [2024-12-06 13:20:50.662688] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:26:44.227 [2024-12-06 13:20:50.662693] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:26:44.227 [2024-12-06 13:20:50.668707] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:44.227 [2024-12-06 13:20:50.668736] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:44.227 [2024-12-06 13:20:50.668743] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:44.227 [2024-12-06 13:20:50.668748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:44.227 [2024-12-06 13:20:50.668772] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:44.227 [2024-12-06 13:20:50.668838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 13:20:50.668858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf2790 with addr=10.0.0.3, port=4420 00:26:44.227 [2024-12-06 13:20:50.668869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2790 is same with the state(6) to be set 00:26:44.227 [2024-12-06 13:20:50.668885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2790 (9): Bad file descriptor 00:26:44.227 [2024-12-06 13:20:50.668900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:44.227 [2024-12-06 13:20:50.668909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:44.227 [2024-12-06 13:20:50.668919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:44.227 [2024-12-06 13:20:50.668927] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:44.227 [2024-12-06 13:20:50.668933] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:44.227 [2024-12-06 13:20:50.668938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:44.227 [2024-12-06 13:20:50.672543] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:26:44.227 [2024-12-06 13:20:50.672581] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:26:44.227 [2024-12-06 13:20:50.672588] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:26:44.227 [2024-12-06 13:20:50.672593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:26:44.227 [2024-12-06 13:20:50.672616] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:26:44.227 [2024-12-06 13:20:50.672670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 13:20:50.672689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe67f60 with addr=10.0.0.4, port=4420 00:26:44.227 [2024-12-06 13:20:50.672700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe67f60 is same with the state(6) to be set 00:26:44.227 [2024-12-06 13:20:50.672727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe67f60 (9): Bad file descriptor 00:26:44.227 [2024-12-06 13:20:50.672743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:26:44.227 [2024-12-06 13:20:50.672752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:26:44.227 [2024-12-06 13:20:50.672762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:26:44.227 [2024-12-06 13:20:50.672770] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:26:44.227 [2024-12-06 13:20:50.672776] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:26:44.227 [2024-12-06 13:20:50.672781] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:26:44.227 [2024-12-06 13:20:50.678783] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:44.227 [2024-12-06 13:20:50.678815] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:44.227 [2024-12-06 13:20:50.678821] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:44.227 [2024-12-06 13:20:50.678827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:44.227 [2024-12-06 13:20:50.678851] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:44.227 [2024-12-06 13:20:50.678910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 13:20:50.678931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf2790 with addr=10.0.0.3, port=4420 00:26:44.227 [2024-12-06 13:20:50.678954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2790 is same with the state(6) to be set 00:26:44.227 [2024-12-06 13:20:50.678974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2790 (9): Bad file descriptor 00:26:44.227 [2024-12-06 13:20:50.679007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:44.227 [2024-12-06 13:20:50.679018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:44.227 [2024-12-06 13:20:50.679028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:44.227 [2024-12-06 13:20:50.679036] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:44.227 [2024-12-06 13:20:50.679042] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:44.227 [2024-12-06 13:20:50.679047] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:44.227 [2024-12-06 13:20:50.682626] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:26:44.227 [2024-12-06 13:20:50.682652] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:26:44.227 [2024-12-06 13:20:50.682659] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:26:44.227 [2024-12-06 13:20:50.682664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:26:44.227 [2024-12-06 13:20:50.682687] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:26:44.227 [2024-12-06 13:20:50.682749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 13:20:50.682770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe67f60 with addr=10.0.0.4, port=4420 00:26:44.227 [2024-12-06 13:20:50.682780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe67f60 is same with the state(6) to be set 00:26:44.227 [2024-12-06 13:20:50.682796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe67f60 (9): Bad file descriptor 00:26:44.227 [2024-12-06 13:20:50.682810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:26:44.227 [2024-12-06 13:20:50.682819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:26:44.227 [2024-12-06 13:20:50.682829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:26:44.227 [2024-12-06 13:20:50.682837] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:26:44.227 [2024-12-06 13:20:50.682843] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:26:44.227 [2024-12-06 13:20:50.682848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:26:44.227 [2024-12-06 13:20:50.688861] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:44.227 [2024-12-06 13:20:50.688886] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:44.227 [2024-12-06 13:20:50.688893] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:44.227 [2024-12-06 13:20:50.688898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:44.227 [2024-12-06 13:20:50.688920] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:44.227 [2024-12-06 13:20:50.688985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 13:20:50.689006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf2790 with addr=10.0.0.3, port=4420 00:26:44.227 [2024-12-06 13:20:50.689017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2790 is same with the state(6) to be set 00:26:44.227 [2024-12-06 13:20:50.689042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2790 (9): Bad file descriptor 00:26:44.227 [2024-12-06 13:20:50.689075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:44.227 [2024-12-06 13:20:50.689086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:44.227 [2024-12-06 13:20:50.689095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:44.227 [2024-12-06 13:20:50.689104] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:44.227 [2024-12-06 13:20:50.689110] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:44.227 [2024-12-06 13:20:50.689115] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:44.227 [2024-12-06 13:20:50.692697] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:26:44.227 [2024-12-06 13:20:50.692723] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:26:44.227 [2024-12-06 13:20:50.692729] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:26:44.227 [2024-12-06 13:20:50.692735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:26:44.227 [2024-12-06 13:20:50.692759] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:26:44.228 [2024-12-06 13:20:50.692810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.228 [2024-12-06 13:20:50.692830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe67f60 with addr=10.0.0.4, port=4420
00:26:44.228 [2024-12-06 13:20:50.692841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe67f60 is same with the state(6) to be set
00:26:44.228 [2024-12-06 13:20:50.692856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe67f60 (9): Bad file descriptor
00:26:44.228 [2024-12-06 13:20:50.692870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:26:44.228 [2024-12-06 13:20:50.692879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:26:44.228 [2024-12-06 13:20:50.692889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:26:44.228 [2024-12-06 13:20:50.692897] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:26:44.228 [2024-12-06 13:20:50.692903] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:26:44.228 [2024-12-06 13:20:50.692908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:26:44.228 [2024-12-06 13:20:50.698930] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:44.228 [2024-12-06 13:20:50.698962] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:44.228 [2024-12-06 13:20:50.698969] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:44.228 [2024-12-06 13:20:50.698975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:44.228 [2024-12-06 13:20:50.698997] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:44.228 [2024-12-06 13:20:50.699047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.228 [2024-12-06 13:20:50.699067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf2790 with addr=10.0.0.3, port=4420
00:26:44.228 [2024-12-06 13:20:50.699078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2790 is same with the state(6) to be set
00:26:44.228 [2024-12-06 13:20:50.699093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2790 (9): Bad file descriptor
00:26:44.228 [2024-12-06 13:20:50.699124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:44.228 [2024-12-06 13:20:50.699135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:44.228 [2024-12-06 13:20:50.699145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:44.228 [2024-12-06 13:20:50.699153] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:44.228 [2024-12-06 13:20:50.699159] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:44.228 [2024-12-06 13:20:50.699178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:44.228 [2024-12-06 13:20:50.702769] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:26:44.228 [2024-12-06 13:20:50.702794] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:26:44.228 [2024-12-06 13:20:50.702801] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:26:44.228 [2024-12-06 13:20:50.702806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:26:44.228 [2024-12-06 13:20:50.702827] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:26:44.228 [2024-12-06 13:20:50.702877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.228 [2024-12-06 13:20:50.702897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe67f60 with addr=10.0.0.4, port=4420
00:26:44.228 [2024-12-06 13:20:50.702907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe67f60 is same with the state(6) to be set
00:26:44.228 [2024-12-06 13:20:50.702923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe67f60 (9): Bad file descriptor
00:26:44.228 [2024-12-06 13:20:50.702937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:26:44.228 [2024-12-06 13:20:50.702960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:26:44.228 [2024-12-06 13:20:50.702970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:26:44.228 [2024-12-06 13:20:50.702979] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:26:44.228 [2024-12-06 13:20:50.702984] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:26:44.228 [2024-12-06 13:20:50.702989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:26:44.228 [2024-12-06 13:20:50.709007] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:44.228 [2024-12-06 13:20:50.709031] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:44.228 [2024-12-06 13:20:50.709037] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:44.228 [2024-12-06 13:20:50.709043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:44.228 [2024-12-06 13:20:50.709065] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:44.228 [2024-12-06 13:20:50.709115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.228 [2024-12-06 13:20:50.709135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf2790 with addr=10.0.0.3, port=4420
00:26:44.228 [2024-12-06 13:20:50.709146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2790 is same with the state(6) to be set
00:26:44.228 [2024-12-06 13:20:50.709161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2790 (9): Bad file descriptor
00:26:44.228 [2024-12-06 13:20:50.709203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:44.228 [2024-12-06 13:20:50.709215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:44.228 [2024-12-06 13:20:50.709224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:44.228 [2024-12-06 13:20:50.709233] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:44.228 [2024-12-06 13:20:50.709239] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:44.228 [2024-12-06 13:20:50.709244] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
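errno = 111 in the posix_sock_create failures above is ECONNREFUSED: the test has already moved each subsystem's listener from port 4420 to 4421, so every reconnect attempt against the stale 4420 path is refused until the next discovery log page (logged just below) repoints the host at 4421. The harness observes the path settling with bdev_nvme_get_controllers piped through jq, as seen in the trace that follows; a minimal stand-alone sketch of that same poll, assuming an SPDK checkout with scripts/rpc.py and the /tmp/host.sock RPC socket used in this run (the retry loop and the expected_port variable are illustrative additions, not part of the test):

#!/usr/bin/env bash
# Poll until the controller's only active path reports the expected port,
# mirroring what get_subsystem_paths in mdns_discovery.sh does below.
expected_port=4421
for _ in {1..30}; do
    ports=$(scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
    [[ $ports == "$expected_port" ]] && exit 0
    sleep 1
done
echo "path never settled on port $expected_port" >&2
exit 1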
00:26:44.228 [2024-12-06 13:20:50.710195] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found
00:26:44.228 [2024-12-06 13:20:50.710218] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:26:44.228 [2024-12-06 13:20:50.710243] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:26:44.228 [2024-12-06 13:20:50.711225] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found
00:26:44.228 [2024-12-06 13:20:50.711256] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:26:44.228 [2024-12-06 13:20:50.711275] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:26:44.487 [2024-12-06 13:20:50.796313] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:26:44.487 [2024-12-06 13:20:50.797307] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]]
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]]
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]]
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]]
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.423 13:20:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1
00:26:45.423 [2024-12-06 13:20:51.877555] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp
00:26:46.359 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs
00:26:46.359 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:26:46.359 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name'
00:26:46.359 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.359 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:46.359 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs
00:26:46.359 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort
00:26:46.617 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.617 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]]
00:26:46.617 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names
00:26:46.617 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:46.617 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:26:46.617 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.617 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:46.617 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:26:46.617 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:26:46.618 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.618 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]]
00:26:46.618 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list
00:26:46.618 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:46.618 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.618 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:46.618 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:26:46.618 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:26:46.618 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:26:46.618 13:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]]
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]]
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:46.618 [2024-12-06 13:20:53.101758] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns
00:26:46.618 2024/12/06 13:20:53 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists
00:26:46.618 request:
00:26:46.618 {
00:26:46.618 "method": "bdev_nvme_start_mdns_discovery",
00:26:46.618 "params": {
00:26:46.618 "name": "mdns",
00:26:46.618 "svcname": "_nvme-disc._http",
00:26:46.618 "hostnqn": "nqn.2021-12.io.spdk:test"
00:26:46.618 }
00:26:46.618 }
00:26:46.618 Got JSON-RPC error response
00:26:46.618 GoRPCClient: error on JSON-RPC call
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:46.618 13:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5
00:26:47.218 [2024-12-06 13:20:53.690464] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED
00:26:47.477 [2024-12-06 13:20:53.790450] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW
00:26:47.477 [2024-12-06 13:20:53.890455] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:26:47.477 [2024-12-06 13:20:53.890495] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:26:47.477 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:26:47.477 cookie is 0
00:26:47.477 is_local: 1
00:26:47.477 our_own: 0
00:26:47.477 wide_area: 0
00:26:47.477 multicast: 1
00:26:47.477 cached: 1
00:26:47.477 [2024-12-06 13:20:53.990459] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:26:47.477 [2024-12-06 13:20:53.990502] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:26:47.477 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:26:47.477 cookie is 0
00:26:47.477 is_local: 1
00:26:47.477 our_own: 0
00:26:47.477 wide_area: 0
00:26:47.477 multicast: 1
00:26:47.477 cached: 1
00:26:47.477 [2024-12-06 13:20:53.990517] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009
00:26:47.735 [2024-12-06 13:20:54.090453] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:26:47.735 [2024-12-06 13:20:54.090494] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:26:47.735 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:26:47.735 cookie is 0
00:26:47.735 is_local: 1
00:26:47.735 our_own: 0
00:26:47.735 wide_area: 0
00:26:47.735 multicast: 1
00:26:47.735 cached: 1
00:26:47.735 [2024-12-06 13:20:54.190470] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:26:47.735 [2024-12-06 13:20:54.190518] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:26:47.735 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:26:47.735 cookie is 0
00:26:47.735 is_local: 1
00:26:47.735 our_own: 0
00:26:47.735 wide_area: 0
00:26:47.735 multicast: 1
00:26:47.735 cached: 1
00:26:47.735 [2024-12-06 13:20:54.190535] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009
00:26:48.672 [2024-12-06 13:20:54.899663] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached
00:26:48.672 [2024-12-06 13:20:54.899704] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected
00:26:48.672 [2024-12-06 13:20:54.899726] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:26:48.672 [2024-12-06 13:20:54.985780] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0
00:26:48.672 [2024-12-06 13:20:55.044255] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421
00:26:48.672 [2024-12-06 13:20:55.044847] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0xfb55f0:1 started.
00:26:48.672 [2024-12-06 13:20:55.046343] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done
00:26:48.672 [2024-12-06 13:20:55.046374] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:26:48.672 [2024-12-06 13:20:55.048432] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0xfb55f0 was disconnected and freed. delete nvme_qpair.
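The Code=-17 Msg=File exists rejection above is the expected negative result: bdev_nvme_start_mdns_discovery refuses to start a second discovery service under a name that is already registered (here the name mdns, re-used with svcname _nvme-disc._http). A hedged sketch of the same probe outside the harness follows; the RPC method and its -b/-s/-q flags are exactly those exercised in the trace, while the err.txt capture and the grep on the error text are illustrative assumptions (this run routes rpc_cmd through the Go JSON-RPC client, whose Code=-17 line appears above):

#!/usr/bin/env bash
# Re-issuing the RPC under the existing discovery name must fail with
# -17 (File exists), matching the JSON-RPC error response logged above.
# Assumes an SPDK checkout (scripts/rpc.py) and the /tmp/host.sock socket.
if scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 2>err.txt; then
    echo "duplicate mDNS discovery was unexpectedly accepted" >&2
    exit 1
fi
grep -q 'File exists' err.txt   # error text as printed by the RPC clients in this run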
00:26:48.673 [2024-12-06 13:20:55.099627] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached
00:26:48.673 [2024-12-06 13:20:55.099664] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected
00:26:48.673 [2024-12-06 13:20:55.099684] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:26:48.673 [2024-12-06 13:20:55.185781] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0
00:26:48.931 [2024-12-06 13:20:55.244249] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421
00:26:48.931 [2024-12-06 13:20:55.244929] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xe56ed0:1 started.
00:26:48.931 [2024-12-06 13:20:55.246457] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done
00:26:48.931 [2024-12-06 13:20:55.246492] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:26:48.931 [2024-12-06 13:20:55.248544] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xe56ed0 was disconnected and freed. delete nvme_qpair.
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name'
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name'
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:52.276 [2024-12-06 13:20:58.284450] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp
00:26:52.276 2024/12/06 13:20:58 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists
00:26:52.276 request:
00:26:52.276 {
00:26:52.276 "method": "bdev_nvme_start_mdns_discovery",
00:26:52.276 "params": {
00:26:52.276 "name": "cdc",
00:26:52.276 "svcname": "_nvme-disc._tcp",
00:26:52.276 "hostnqn": "nqn.2021-12.io.spdk:test"
00:26:52.276 }
00:26:52.276 }
00:26:52.276 Got JSON-RPC error response
00:26:52.276 GoRPCClient: error on JSON-RPC call
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name'
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:26:52.276 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:26:52.276 +;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:26:52.276 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:26:52.276 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:26:52.276 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:26:52.276 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:26:52.276 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.276 13:20:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1
00:26:52.276 [2024-12-06 13:20:58.490437] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp
00:26:53.209 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found'
00:26:53.209 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:26:53.209 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3
00:26:53.209 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:26:53.209 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found'
00:26:53.209 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:26:53.209 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:26:53.209 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:26:53.209 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:26:53.209 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:26:53.209 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:26:53.209 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:26:53.209 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:53.209 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:26:53.209 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:53.209 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:26:53.209 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:53.209 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:26:53.209 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:26:53.210 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:26:53.210 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]]
00:26:53.210 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0
00:26:53.210 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr
00:26:53.210 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:53.210 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:53.210 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:53.210 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT
00:26:53.210 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 95151
00:26:53.210 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 95151
00:26:53.210 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 95169
Got SIGTERM, quitting.
13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini
00:26:53.210
00:26:53.210 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:53.210 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync
Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4.
Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3.
avahi-daemon 0.8 exiting.
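check_mdns_request_exists above validates the advertisement state by parsing avahi-browse -t -r _nvme-disc._tcp -p: '+' records announce a service, and '=' records resolve it to semicolon-delimited host;address;port;TXT fields. A condensed sketch of that matching logic follows; the avahi-browse invocation and record layout are taken verbatim from the trace, while the helper name check_mdns and the loop shape are illustrative:

#!/usr/bin/env bash
# Return 0 if a resolved ('=') record advertises the given service name
# at the given address and port, as check_mdns_request_exists does above.
check_mdns() {
    local name=$1 ip=$2 port=$3 line
    while IFS= read -r line; do
        [[ $line == =* && $line == *";$name;"* && $line == *";$ip;$port;"* ]] && return 0
    done < <(avahi-browse -t -r _nvme-disc._tcp -p)
    return 1
}
# Example: after the listener is removed, spdk1 should no longer resolve:
# check_mdns spdk1 10.0.0.3 8009 || echo 'spdk1 no longer advertised on 10.0.0.3:8009'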
00:26:53.210 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:53.210 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e
00:26:53.210 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:53.210 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 95116 ']'
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 95116
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # '[' -z 95116 ']'
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # kill -0 95116
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # uname
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95116
00:26:53.469 killing process with pid 95116
13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95116'
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@973 -- # kill 95116
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@978 -- # wait 95116
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:26:53.469 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:26:53.727 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:26:53.727 13:20:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:26:53.727 13:21:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:26:53.727 13:21:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:26:53.727 13:21:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:26:53.727 13:21:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:26:53.727 13:21:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns
00:26:53.727 13:21:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:53.727 13:21:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:53.727 13:21:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:53.727 13:21:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0
00:26:53.727
00:26:53.727 real 0m21.545s
00:26:53.727 user 0m42.157s
00:26:53.727 sys 0m2.059s
00:26:53.727 13:21:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:53.727 ************************************
00:26:53.727 END TEST nvmf_mdns_discovery
00:26:53.727 ************************************
00:26:53.727 13:21:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:53.727 13:21:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]]
00:26:53.727 13:21:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp
00:26:53.727 13:21:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:53.727 13:21:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:53.727 13:21:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:53.727 ************************************
00:26:53.727 START TEST nvmf_host_multipath
00:26:53.727 ************************************
00:26:53.727 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp
00:26:53.986 * Looking for test storage...
00:26:53.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-:
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-:
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<'
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1
00:26:53.986 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:53.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:53.987 --rc genhtml_branch_coverage=1
00:26:53.987 --rc genhtml_function_coverage=1
00:26:53.987 --rc genhtml_legend=1
00:26:53.987 --rc geninfo_all_blocks=1
00:26:53.987 --rc geninfo_unexecuted_blocks=1
00:26:53.987
00:26:53.987 '
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:53.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:53.987 --rc genhtml_branch_coverage=1
00:26:53.987 --rc genhtml_function_coverage=1
00:26:53.987 --rc genhtml_legend=1
00:26:53.987 --rc geninfo_all_blocks=1
00:26:53.987 --rc geninfo_unexecuted_blocks=1
00:26:53.987
00:26:53.987 '
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:26:53.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:53.987 --rc genhtml_branch_coverage=1
00:26:53.987 --rc genhtml_function_coverage=1
00:26:53.987 --rc genhtml_legend=1
00:26:53.987 --rc geninfo_all_blocks=1
00:26:53.987 --rc geninfo_unexecuted_blocks=1
00:26:53.987
00:26:53.987 '
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:26:53.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:53.987 --rc genhtml_branch_coverage=1
00:26:53.987 --rc genhtml_function_coverage=1
00:26:53.987 --rc genhtml_legend=1
00:26:53.987 --rc geninfo_all_blocks=1
00:26:53.987 --rc geninfo_unexecuted_blocks=1
00:26:53.987
00:26:53.987 '
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:53.987 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init
00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- #
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:53.988 Cannot find device "nvmf_init_br" 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:53.988 Cannot find device "nvmf_init_br2" 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:53.988 Cannot find device "nvmf_tgt_br" 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:53.988 Cannot find device "nvmf_tgt_br2" 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:53.988 Cannot find device "nvmf_init_br" 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:53.988 Cannot find device "nvmf_init_br2" 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:53.988 Cannot find device "nvmf_tgt_br" 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:26:53.988 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:53.988 Cannot find device "nvmf_tgt_br2" 00:26:54.246 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:26:54.246 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:54.246 Cannot find device "nvmf_br" 00:26:54.246 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:26:54.246 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:54.246 Cannot find device "nvmf_init_if" 00:26:54.246 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:26:54.246 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:54.246 Cannot find device "nvmf_init_if2" 00:26:54.246 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:26:54.246 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:26:54.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:54.246 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:26:54.246 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:54.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:54.246 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:26:54.246 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:54.246 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:54.246 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:54.246 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:54.246 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
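The nvmf_veth_init sequence running here builds the self-contained test network the rest of the run depends on: a target-side network namespace, veth pairs whose bridge ends get enslaved to a common nvmf_br bridge (the remaining master assignments follow just below), 10.0.0.1/2 on the initiator side and 10.0.0.3/4 inside the namespace, then iptables ACCEPT rules and ping reachability checks. A minimal sketch of the same topology, trimmed to a single initiator/target pair and assuming root plus iproute2; interface and namespace names are copied from the log:

  ip netns add nvmf_tgt_ns_spdk                              # target-side namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move target end into the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                            # L2 hub joining both sides
  ip link set nvmf_br up
  ip link set nvmf_init_br up; ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3                                         # initiator -> target reachability

The "Cannot find device" / "Cannot open network namespace" messages above are expected on a clean run: the teardown half of the helper runs first and each failing delete is followed by a "# true" in the trace, so leftover-interface cleanup is best-effort.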
00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:54.247 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:54.505 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:54.505 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:26:54.505 00:26:54.505 --- 10.0.0.3 ping statistics --- 00:26:54.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.505 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:54.505 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:54.505 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:26:54.505 00:26:54.505 --- 10.0.0.4 ping statistics --- 00:26:54.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.505 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:54.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:54.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:26:54.505 00:26:54.505 --- 10.0.0.1 ping statistics --- 00:26:54.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.505 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:54.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:54.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:26:54.505 00:26:54.505 --- 10.0.0.2 ping statistics --- 00:26:54.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.505 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=95814 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 95814 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 95814 ']' 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:54.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:54.505 13:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:54.505 [2024-12-06 13:21:00.890046] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
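nvmfappstart then launches the target inside the namespace by prepending the netns wrapper to the app argv, so every listener the target opens lives on the namespaced interfaces. A minimal sketch of the effect, not the exact helper (which also handles tracing flags and retries); the binary path and flags are copied from the log:

  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # namespace wrapper goes first
  "${NVMF_APP[@]}" &                                       # start the target in the background
  nvmfpid=$!                                               # pid used by waitforlisten/killprocess
  # waitforlisten then polls /var/tmp/spdk.sock until the RPC server answers.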
00:26:54.505 [2024-12-06 13:21:00.890139] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.763 [2024-12-06 13:21:01.033848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:54.763 [2024-12-06 13:21:01.066985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.763 [2024-12-06 13:21:01.067045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.763 [2024-12-06 13:21:01.067057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:54.763 [2024-12-06 13:21:01.067066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:54.763 [2024-12-06 13:21:01.067073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:54.763 [2024-12-06 13:21:01.067824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.763 [2024-12-06 13:21:01.067838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.763 13:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.763 13:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:26:54.763 13:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:54.763 13:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:54.763 13:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:54.763 13:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:54.763 13:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=95814 00:26:54.763 13:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:55.021 [2024-12-06 13:21:01.504141] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.021 13:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:55.587 Malloc0 00:26:55.587 13:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:55.844 13:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:56.101 13:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:56.359 [2024-12-06 13:21:02.758473] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:56.359 13:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4421 00:26:56.617 [2024-12-06 13:21:03.086672] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:56.617 13:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95904 00:26:56.617 13:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:56.617 13:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:56.617 13:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95904 /var/tmp/bdevperf.sock 00:26:56.617 13:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 95904 ']' 00:26:56.617 13:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:56.617 13:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:56.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:56.617 13:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:56.617 13:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:56.617 13:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:57.184 13:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:57.184 13:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:26:57.184 13:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:57.443 13:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:58.010 Nvme0n1 00:26:58.010 13:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:58.268 Nvme0n1 00:26:58.268 13:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:26:58.268 13:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:59.204 13:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:26:59.204 13:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:26:59.461 13:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
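With bdevperf holding two paths to the same subsystem (Nvme0 attached over both 4420 and 4421 with -x multipath), each set_ANA_state call steers I/O by flipping listener ANA states over RPC. A minimal sketch of the two calls behind set_ANA_state non_optimized optimized, assuming the target started earlier is still up:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.3 -s 4421 -n optimized
  # confirm_io_on_port then attaches bpftrace probes (nvmf_path.bt) and parses the
  # @path[ip, port] counters out of trace.txt to verify I/O landed on 4421.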
00:27:00.028 13:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:27:00.028 13:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95983 00:27:00.028 13:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95814 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:00.028 13:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:06.646 13:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:06.646 13:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:06.646 13:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:06.646 13:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:06.646 Attaching 4 probes... 00:27:06.646 @path[10.0.0.3, 4421]: 16794 00:27:06.646 @path[10.0.0.3, 4421]: 16915 00:27:06.646 @path[10.0.0.3, 4421]: 17367 00:27:06.646 @path[10.0.0.3, 4421]: 16968 00:27:06.646 @path[10.0.0.3, 4421]: 17112 00:27:06.646 13:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:06.646 13:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:27:06.646 13:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:06.646 13:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:06.646 13:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:06.646 13:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:06.646 13:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95983 00:27:06.646 13:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:06.646 13:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:27:06.646 13:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:06.646 13:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:27:06.904 13:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:27:06.904 13:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96116 00:27:06.904 13:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95814 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:06.904 13:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:13.467 13:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | 
select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:27:13.467 13:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:13.467 13:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:27:13.467 13:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:13.467 Attaching 4 probes... 00:27:13.467 @path[10.0.0.3, 4420]: 16461 00:27:13.467 @path[10.0.0.3, 4420]: 16811 00:27:13.467 @path[10.0.0.3, 4420]: 16989 00:27:13.467 @path[10.0.0.3, 4420]: 16940 00:27:13.467 @path[10.0.0.3, 4420]: 16874 00:27:13.467 13:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:27:13.467 13:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:13.467 13:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:13.467 13:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:27:13.467 13:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:27:13.467 13:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:27:13.467 13:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96116 00:27:13.467 13:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:13.467 13:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:27:13.467 13:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:27:13.467 13:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:27:13.725 13:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:27:13.725 13:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96252 00:27:13.725 13:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95814 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:13.725 13:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:20.286 13:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:20.286 13:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:20.286 13:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:20.286 13:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:20.286 Attaching 4 probes... 
00:27:20.286 @path[10.0.0.3, 4421]: 14985 00:27:20.286 @path[10.0.0.3, 4421]: 16986 00:27:20.286 @path[10.0.0.3, 4421]: 16867 00:27:20.286 @path[10.0.0.3, 4421]: 16918 00:27:20.286 @path[10.0.0.3, 4421]: 16906 00:27:20.286 13:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:20.286 13:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:27:20.286 13:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:20.286 13:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:20.286 13:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:20.286 13:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:20.286 13:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96252 00:27:20.286 13:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:20.286 13:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:27:20.286 13:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:27:20.545 13:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:27:20.803 13:21:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:27:20.803 13:21:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95814 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:20.803 13:21:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96383 00:27:20.803 13:21:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:27.362 13:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:27.362 13:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:27:27.362 13:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:27:27.362 13:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:27.362 Attaching 4 probes... 
00:27:27.362 00:27:27.362 00:27:27.362 00:27:27.362 00:27:27.362 00:27:27.362 13:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:27.362 13:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:27.362 13:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:27:27.362 13:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:27:27.362 13:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:27:27.362 13:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:27:27.362 13:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96383 00:27:27.362 13:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:27.362 13:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:27:27.363 13:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:27.363 13:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:27:27.927 13:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:27:27.927 13:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96513 00:27:27.927 13:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95814 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:27.927 13:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:34.485 13:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:34.485 13:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:34.485 13:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:34.485 13:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:34.485 Attaching 4 probes... 
00:27:34.485 @path[10.0.0.3, 4421]: 14606 00:27:34.485 @path[10.0.0.3, 4421]: 15420 00:27:34.485 @path[10.0.0.3, 4421]: 16425 00:27:34.485 @path[10.0.0.3, 4421]: 16231 00:27:34.485 @path[10.0.0.3, 4421]: 16372 00:27:34.485 13:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:34.485 13:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:27:34.485 13:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:34.485 13:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:34.485 13:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:34.485 13:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:34.485 13:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96513 00:27:34.485 13:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:34.485 13:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:27:34.485 [2024-12-06 13:21:40.860810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.860866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.860878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.860887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.860895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.860903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.860912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.860921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.860929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.860937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.860960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.860969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.860977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.860985] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.860993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.861002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.861010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.861018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.861026] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.861034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.861042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.861050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.861058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.861067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.861075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.861083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.485 [2024-12-06 13:21:40.861091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.486 [2024-12-06 13:21:40.861099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.486 [2024-12-06 13:21:40.861107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.486 [2024-12-06 13:21:40.861115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.486 [2024-12-06 13:21:40.861123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edceb0 is same with the state(6) to be set 00:27:34.486 13:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:27:35.421 13:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:27:35.421 13:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96649 00:27:35.421 13:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95814 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:35.421 13:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:42.073 
13:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:42.073 13:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:27:42.073 13:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:27:42.073 13:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:42.073 Attaching 4 probes... 00:27:42.074 @path[10.0.0.3, 4420]: 15273 00:27:42.074 @path[10.0.0.3, 4420]: 16385 00:27:42.074 @path[10.0.0.3, 4420]: 16011 00:27:42.074 @path[10.0.0.3, 4420]: 16325 00:27:42.074 @path[10.0.0.3, 4420]: 16324 00:27:42.074 13:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:42.074 13:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:27:42.074 13:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:42.074 13:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:27:42.074 13:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:27:42.074 13:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:27:42.074 13:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96649 00:27:42.074 13:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:42.074 13:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:27:42.074 [2024-12-06 13:21:48.560257] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:42.074 13:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:27:42.639 13:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:27:49.248 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:27:49.248 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96842 00:27:49.248 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95814 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:49.248 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:54.511 13:22:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:54.511 13:22:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:55.086 Attaching 4 probes... 00:27:55.086 @path[10.0.0.3, 4421]: 11181 00:27:55.086 @path[10.0.0.3, 4421]: 8809 00:27:55.086 @path[10.0.0.3, 4421]: 14680 00:27:55.086 @path[10.0.0.3, 4421]: 14467 00:27:55.086 @path[10.0.0.3, 4421]: 13881 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96842 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95904 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 95904 ']' 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 95904 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95904 00:27:55.086 killing process with pid 95904 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95904' 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 95904 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 95904 00:27:55.086 { 00:27:55.086 "results": [ 00:27:55.086 { 00:27:55.086 "job": "Nvme0n1", 00:27:55.086 "core_mask": "0x4", 00:27:55.086 "workload": "verify", 00:27:55.086 "status": "terminated", 00:27:55.086 "verify_range": { 00:27:55.086 "start": 0, 00:27:55.086 "length": 16384 00:27:55.086 }, 00:27:55.086 "queue_depth": 128, 00:27:55.086 "io_size": 4096, 00:27:55.086 "runtime": 56.584377, 00:27:55.086 "iops": 6762.431969516957, 00:27:55.086 "mibps": 26.415749880925613, 00:27:55.086 "io_failed": 0, 00:27:55.086 "io_timeout": 0, 00:27:55.086 "avg_latency_us": 18895.713219384157, 00:27:55.086 "min_latency_us": 476.62545454545455, 00:27:55.086 "max_latency_us": 7046430.72 00:27:55.086 } 00:27:55.086 ], 00:27:55.086 "core_count": 1 00:27:55.086 } 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95904 00:27:55.086 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 
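The terminated-job summary printed above is bdevperf's JSON result for the verify workload cut short at shutdown, and its numbers are self-consistent: with io_size 4096, throughput in MiB/s equals iops * 4096 / 2^20. A quick check, assuming the JSON block is saved to results.json (hypothetical filename):

  jq -r '.results[0] | "\(.iops * 4096 / 1048576) MiB/s vs reported \(.mibps) MiB/s"' results.json
  # 6762.43 * 4096 / 1048576 = 26.42, matching "mibps": 26.4157...

The gap between the ~16k IOPS sampled on a single optimized path and the ~6.8k whole-run average is consistent with the intervals where both listeners were inaccessible or a listener was being removed during the ANA transitions.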
00:27:55.086 [2024-12-06 13:21:03.171634] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization...
00:27:55.086 [2024-12-06 13:21:03.171762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95904 ]
00:27:55.086 [2024-12-06 13:21:03.322666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:55.086 [2024-12-06 13:21:03.362087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:55.086 Running I/O for 90 seconds...
00:27:55.086 8775.00 IOPS, 34.28 MiB/s [2024-12-06T13:22:01.614Z]
00:27:55.086 8788.50 IOPS, 34.33 MiB/s [2024-12-06T13:22:01.614Z]
00:27:55.086 8717.00 IOPS, 34.05 MiB/s [2024-12-06T13:22:01.614Z]
00:27:55.086 8659.75 IOPS, 33.83 MiB/s [2024-12-06T13:22:01.614Z]
00:27:55.086 8660.80 IOPS, 33.83 MiB/s [2024-12-06T13:22:01.614Z]
00:27:55.086 8634.83 IOPS, 33.73 MiB/s [2024-12-06T13:22:01.614Z]
00:27:55.086 8625.71 IOPS, 33.69 MiB/s [2024-12-06T13:22:01.614Z]
00:27:55.086 8622.75 IOPS, 33.68 MiB/s [2024-12-06T13:22:01.614Z]
00:27:55.086 [2024-12-06 13:21:13.177829 - 13:21:13.183937] nvme_qpair.c: *NOTICE*: WRITE sqid:1 lba:55392-55840 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 lba:55208-55384 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every command completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0 [per-command NOTICE command/completion pairs condensed]
00:27:55.088 8591.89 IOPS, 33.56 MiB/s [2024-12-06T13:22:01.616Z]
00:27:55.088 8572.40 IOPS, 33.49 MiB/s [2024-12-06T13:22:01.616Z]
00:27:55.088 8557.27 IOPS, 33.43 MiB/s [2024-12-06T13:22:01.616Z]
00:27:55.088 8551.92 IOPS, 33.41 MiB/s [2024-12-06T13:22:01.616Z]
00:27:55.088 8546.54 IOPS, 33.38 MiB/s [2024-12-06T13:22:01.616Z]
00:27:55.088 8539.79 IOPS, 33.36 MiB/s [2024-12-06T13:22:01.616Z]
00:27:55.088 8538.07 IOPS, 33.35 MiB/s [2024-12-06T13:22:01.616Z]
00:27:55.089 [2024-12-06 13:21:19.874546 - 13:21:19.879521] nvme_qpair.c: *NOTICE*: WRITE sqid:1 lba:113368-114016 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 lba:113136-113296 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every command completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0 [per-command NOTICE command/completion pairs condensed; excerpt ends mid-record at 13:21:19.879521]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:55.091 [2024-12-06 13:21:19.879799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.091 [2024-12-06 13:21:19.879827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:55.091 [2024-12-06 13:21:19.879861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.091 [2024-12-06 13:21:19.879879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.091 [2024-12-06 13:21:19.879908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.091 [2024-12-06 13:21:19.879926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.091 [2024-12-06 13:21:19.879971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.091 [2024-12-06 13:21:19.879991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:55.091 [2024-12-06 13:21:19.880020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.091 [2024-12-06 13:21:19.880037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:55.091 [2024-12-06 13:21:19.880067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.092 [2024-12-06 13:21:19.880084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.092 [2024-12-06 13:21:19.880130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.092 [2024-12-06 13:21:19.880180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.092 [2024-12-06 13:21:19.880227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114096 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:55.092 [2024-12-06 13:21:19.880284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.092 [2024-12-06 13:21:19.880335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.092 [2024-12-06 13:21:19.880382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.092 [2024-12-06 13:21:19.880428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.092 [2024-12-06 13:21:19.880474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.092 [2024-12-06 13:21:19.880520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.092 [2024-12-06 13:21:19.880566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.092 [2024-12-06 13:21:19.880612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.092 [2024-12-06 13:21:19.880657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.092 [2024-12-06 13:21:19.880704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:62 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.092 [2024-12-06 13:21:19.880750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.092 [2024-12-06 13:21:19.880796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.092 [2024-12-06 13:21:19.880849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.092 [2024-12-06 13:21:19.880897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.092 [2024-12-06 13:21:19.880960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:19.880993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.092 [2024-12-06 13:21:19.881010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:55.092 8051.88 IOPS, 31.45 MiB/s [2024-12-06T13:22:01.620Z] 8034.06 IOPS, 31.38 MiB/s [2024-12-06T13:22:01.620Z] 8059.28 IOPS, 31.48 MiB/s [2024-12-06T13:22:01.620Z] 8081.16 IOPS, 31.57 MiB/s [2024-12-06T13:22:01.620Z] 8099.60 IOPS, 31.64 MiB/s [2024-12-06T13:22:01.620Z] 8117.19 IOPS, 31.71 MiB/s [2024-12-06T13:22:01.620Z] 8091.09 IOPS, 31.61 MiB/s [2024-12-06T13:22:01.620Z] [2024-12-06 13:21:27.138604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.092 [2024-12-06 13:21:27.138672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:27.138733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.092 [2024-12-06 13:21:27.138755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 13:21:27.138780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.092 [2024-12-06 13:21:27.138796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:55.092 [2024-12-06 
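The "(03/02)" and, later in this log, "(00/08)" pairs that spdk_nvme_print_completion() prints are the NVMe status code type (SCT) and status code (SC) from each completion entry. Per the NVMe base specification, SCT 0x3 is the path-related set (SC 0x02 = Asymmetric Access Inaccessible, i.e. the controller answers but the namespace's ANA group is not currently reachable through this path), and SCT 0x0 is the generic set (SC 0x08 = command aborted due to SQ deletion). A minimal, self-contained C sketch decoding the two pairs seen in this log (not SPDK code; the helper names are hypothetical):

#include <stdio.h>
#include <stdint.h>

/* Hypothetical decoder for the "(SCT/SC)" pair printed above;
 * layout and names follow the NVMe base specification. */
static const char *sct_name(uint8_t sct)
{
	switch (sct) {
	case 0x0: return "GENERIC";
	case 0x1: return "COMMAND SPECIFIC";
	case 0x2: return "MEDIA/DATA INTEGRITY";
	case 0x3: return "PATH RELATED";
	default:  return "RESERVED/VENDOR";
	}
}

static const char *sc_name(uint8_t sct, uint8_t sc)
{
	if (sct == 0x3 && sc == 0x02) return "ASYMMETRIC ACCESS INACCESSIBLE";
	if (sct == 0x3 && sc == 0x03) return "ASYMMETRIC ACCESS TRANSITION";
	if (sct == 0x0 && sc == 0x08) return "ABORTED - SQ DELETION";
	return "OTHER";
}

int main(void)
{
	/* The two pairs that dominate this log. */
	const uint8_t pairs[][2] = { { 0x3, 0x02 }, { 0x0, 0x08 } };
	for (size_t i = 0; i < sizeof(pairs) / sizeof(pairs[0]); i++) {
		printf("(%02x/%02x) = %s: %s\n", pairs[i][0], pairs[i][1],
		       sct_name(pairs[i][0]), sc_name(pairs[i][0], pairs[i][1]));
	}
	return 0;
}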
00:27:55.092 [2024-12-06 13:21:27.138604 .. 13:21:27.139412] nvme_qpair.c: 243/474, 10 pairs condensed: WRITE sqid:1 nsid:1 lba:5728..5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0074..007d p:0 m:0 dnr:0
00:27:55.092 [2024-12-06 13:21:27.139435 .. 13:21:27.142438] nvme_qpair.c: 243/474, 63 pairs condensed: READ sqid:1 nsid:1 lba:5224..5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:007e..007f,0000..003c p:0 m:0 dnr:0
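In the condensed READ run above, the completions' sqhd values advance 007e, 007f, then wrap to 0000, 0001, ..., 003c: the reported submission queue head moves circularly, and the 0x7f -> 0x00 wrap is consistent with a 128-entry I/O SQ on this connection. A one-line C sketch of that advance, with sq_size = 128 assumed only from the observed wrap:

#include <stdint.h>

/* sqhd as reported in each NVMe completion advances circularly;
 * sq_size = 128 is inferred from the 0x7f -> 0x00 wrap in this log. */
static inline uint16_t next_sqhd(uint16_t sqhd, uint16_t sq_size)
{
	return (uint16_t)((sqhd + 1) % sq_size); /* e.g. next_sqhd(0x7f, 128) == 0x00 */
}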
00:27:55.094 [2024-12-06 13:21:27.142461 .. 13:21:27.145283] nvme_qpair.c: 243/474, 55 pairs condensed: WRITE sqid:1 nsid:1 lba:5808..6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:003d..0073 p:0 m:0 dnr:0
00:27:55.096 7864.22 IOPS, 30.72 MiB/s [2024-12-06T13:22:01.624Z] 7536.54 IOPS, 29.44 MiB/s [2024-12-06T13:22:01.624Z] 7235.08 IOPS, 28.26 MiB/s [2024-12-06T13:22:01.624Z] 6956.81 IOPS, 27.18 MiB/s [2024-12-06T13:22:01.624Z] 6699.15 IOPS, 26.17 MiB/s [2024-12-06T13:22:01.624Z] 6459.89 IOPS, 25.23 MiB/s [2024-12-06T13:22:01.624Z] 6237.14 IOPS, 24.36 MiB/s [2024-12-06T13:22:01.624Z] 6187.50 IOPS, 24.17 MiB/s [2024-12-06T13:22:01.624Z] 6230.42 IOPS, 24.34 MiB/s [2024-12-06T13:22:01.624Z] 6277.22 IOPS, 24.52 MiB/s [2024-12-06T13:22:01.624Z] 6336.18 IOPS, 24.75 MiB/s [2024-12-06T13:22:01.624Z] 6389.53 IOPS, 24.96 MiB/s [2024-12-06T13:22:01.624Z] 6439.89 IOPS, 25.16 MiB/s [2024-12-06T13:22:01.624Z] 6482.97 IOPS, 25.32 MiB/s [2024-12-06T13:22:01.624Z]
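Each I/O in these runs is len:8 blocks of 512 B, i.e. 4 KiB, so the periodic bandwidth samples follow directly from the IOPS samples: MiB/s = IOPS * 4096 / 2^20 = IOPS / 256. The figures check out at both ends of the run above: 8051.88 / 256 = 31.45 MiB/s for the first sample, and the low point of 6187.50 IOPS works out to 6187.50 / 256 = 24.17 MiB/s, recovering to 6482.97 IOPS (25.32 MiB/s) by the last sample.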
READ sqid:1 cid:79 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:36856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:36872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:36896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36904 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:36912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:36920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:36928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:36936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:55.096 [2024-12-06 13:21:40.863838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.096 [2024-12-06 13:21:40.863868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.096 [2024-12-06 13:21:40.863885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:37000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.097 [2024-12-06 13:21:40.863899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.863915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.097 [2024-12-06 13:21:40.863930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.863959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.097 [2024-12-06 13:21:40.863977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.863993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.097 [2024-12-06 13:21:40.864016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.097 [2024-12-06 13:21:40.864048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.097 [2024-12-06 13:21:40.864078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.097 [2024-12-06 13:21:40.864109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.097 [2024-12-06 13:21:40.864139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.097 [2024-12-06 13:21:40.864170] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:37072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.097 [2024-12-06 13:21:40.864201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:37088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:37104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864487] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:37200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:37240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.864971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.864988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.865002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.865018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:37280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.865033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.865049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.865064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.865080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.865094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 13:21:40.865111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.865125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.097 [2024-12-06 
13:21:40.865142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.097 [2024-12-06 13:21:40.865156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:37320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:37344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:37368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:37424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:120 nsid:1 lba:37472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.865974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.865998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.866013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.866030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.866044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.866070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.866086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.866102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.866117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.866133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.866148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.866164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.866179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.866196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.098 [2024-12-06 13:21:40.866210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.866247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.098 [2024-12-06 13:21:40.866264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37576 len:8 PRP1 0x0 PRP2 0x0 00:27:55.098 [2024-12-06 13:21:40.866278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.866297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.098 [2024-12-06 13:21:40.866308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.098 [2024-12-06 13:21:40.866319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37584 len:8 PRP1 0x0 PRP2 0x0 00:27:55.098 [2024-12-06 13:21:40.866333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.866348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.098 [2024-12-06 13:21:40.866358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.098 [2024-12-06 13:21:40.866369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37592 len:8 PRP1 0x0 PRP2 0x0 00:27:55.098 [2024-12-06 13:21:40.866383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.098 [2024-12-06 13:21:40.866397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.098 [2024-12-06 13:21:40.866407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.098 [2024-12-06 13:21:40.866418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37600 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.866432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.866447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.866458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.866469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37608 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.866491] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.866506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.866517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.866528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37616 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.866542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.866556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.866566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.866577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37624 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.866591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.866605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.866616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.866626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37632 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.866640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.866655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.866666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.866677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37640 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.866690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.866704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.866715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.866732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37648 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.866747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.866762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.866772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.866783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37656 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.866800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.866815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.866825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.866836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37664 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.866850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.866864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.866875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.866891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37672 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.866906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.866920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.866931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.866956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37680 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.866974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.866988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.866999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.867010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37688 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.867023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.867038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.867048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.867059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37696 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.867073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.867087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.867097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.867108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37704 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.867122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.867136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.867146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.867159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37712 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.867174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.867204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.867215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.867226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37720 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.867242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.867257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.867267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.867279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37728 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.867292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.867307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.867324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.867336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37736 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.867350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.867365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.867375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.867386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37744 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.867411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.867425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.867435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.867446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37752 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.867460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 
13:21:40.867474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.867485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.867495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37760 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.867509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.867523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.099 [2024-12-06 13:21:40.867534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.099 [2024-12-06 13:21:40.867544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37768 len:8 PRP1 0x0 PRP2 0x0 00:27:55.099 [2024-12-06 13:21:40.867558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.099 [2024-12-06 13:21:40.867573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.100 [2024-12-06 13:21:40.867583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.100 [2024-12-06 13:21:40.867596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37776 len:8 PRP1 0x0 PRP2 0x0 00:27:55.100 [2024-12-06 13:21:40.867610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.100 [2024-12-06 13:21:40.867625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.100 [2024-12-06 13:21:40.867635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.100 [2024-12-06 13:21:40.867646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37784 len:8 PRP1 0x0 PRP2 0x0 00:27:55.100 [2024-12-06 13:21:40.867662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.100 [2024-12-06 13:21:40.867676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.100 [2024-12-06 13:21:40.867687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.100 [2024-12-06 13:21:40.867697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37792 len:8 PRP1 0x0 PRP2 0x0 00:27:55.100 [2024-12-06 13:21:40.867711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.100 [2024-12-06 13:21:40.867732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.100 [2024-12-06 13:21:40.867743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.100 [2024-12-06 13:21:40.867754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37800 len:8 PRP1 0x0 PRP2 0x0 00:27:55.100 [2024-12-06 13:21:40.867768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.100 [2024-12-06 13:21:40.867783] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.100 [2024-12-06 13:21:40.867793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.100 [2024-12-06 13:21:40.867804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37808 len:8 PRP1 0x0 PRP2 0x0 00:27:55.100 [2024-12-06 13:21:40.867818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.100 [2024-12-06 13:21:40.867832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.100 [2024-12-06 13:21:40.867842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.100 [2024-12-06 13:21:40.867853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37816 len:8 PRP1 0x0 PRP2 0x0 00:27:55.100 [2024-12-06 13:21:40.867867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.100 [2024-12-06 13:21:40.867882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.100 [2024-12-06 13:21:40.867892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.100 [2024-12-06 13:21:40.867903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37824 len:8 PRP1 0x0 PRP2 0x0 00:27:55.100 [2024-12-06 13:21:40.867917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.100 [2024-12-06 13:21:40.867931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:55.100 [2024-12-06 13:21:40.867963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:55.100 [2024-12-06 13:21:40.867976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37832 len:8 PRP1 0x0 PRP2 0x0 00:27:55.100 [2024-12-06 13:21:40.867990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.100 [2024-12-06 13:21:40.868140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.100 [2024-12-06 13:21:40.868168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.100 [2024-12-06 13:21:40.868188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.100 [2024-12-06 13:21:40.868202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.100 [2024-12-06 13:21:40.868217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.100 [2024-12-06 13:21:40.868231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.100 [2024-12-06 13:21:40.868246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.100 [2024-12-06 13:21:40.868262] nvme_qpair.c: 
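The status pairs printed above are the NVMe status code type and status code: (00/08) is the generic status COMMAND ABORTED DUE TO SQ DELETION, reported for every command still in flight when a submission queue is torn down, and (03/02) is the path-related status ASYMMETRIC ACCESS INACCESSIBLE, returned while the ANA state of the path being failed over is inaccessible. A minimal decoder for the 16-bit completion status word, following the field layout in the NVMe base specification rather than any SPDK-internal structure:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode an NVMe completion status word (upper half of CQE dword 3).
     * Layout per the NVMe base spec: bit 0 = phase tag (P), bits 1..8 =
     * status code (SC), bits 9..11 = status code type (SCT), bits 12..13 =
     * command retry delay (CRD), bit 14 = more (M), bit 15 = do not retry
     * (DNR). The (SCT/SC) pair is what the log prints as e.g. (00/08). */
    static void decode_status(uint16_t status)
    {
        unsigned p   = status & 0x1;
        unsigned sc  = (status >> 1) & 0xff;
        unsigned sct = (status >> 9) & 0x7;
        unsigned m   = (status >> 14) & 0x1;
        unsigned dnr = (status >> 15) & 0x1;

        printf("(%02x/%02x) p:%u m:%u dnr:%u -> ", sct, sc, p, m, dnr);
        if (sct == 0x0 && sc == 0x08)
            puts("ABORTED - SQ DELETION");
        else if (sct == 0x3 && sc == 0x02)
            puts("ASYMMETRIC ACCESS INACCESSIBLE");
        else
            puts("other status");
    }

    int main(void)
    {
        decode_status(0x0 << 9 | 0x08 << 1); /* generic / SQ deletion */
        decode_status(0x3 << 9 | 0x02 << 1); /* path-related / ANA inaccessible */
        return 0;
    }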
00:27:55.100 [2024-12-06 13:21:40.868276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x767d90 is same with the state(6) to be set
00:27:55.100 [2024-12-06 13:21:40.869706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.100 [2024-12-06 13:21:40.869748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x767d90 (9): Bad file descriptor
00:27:55.100 [2024-12-06 13:21:40.869874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.100 [2024-12-06 13:21:40.869905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x767d90 with addr=10.0.0.3, port=4421
00:27:55.100 [2024-12-06 13:21:40.869925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x767d90 is same with the state(6) to be set
00:27:55.100 [2024-12-06 13:21:40.869987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x767d90 (9): Bad file descriptor
00:27:55.100 [2024-12-06 13:21:40.870020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.100 [2024-12-06 13:21:40.870036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.100 [2024-12-06 13:21:40.870053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.100 [2024-12-06 13:21:40.870067] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.100 [2024-12-06 13:21:40.870083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.100 6515.08 IOPS, 25.45 MiB/s [2024-12-06T13:22:01.628Z] 6546.87 IOPS, 25.57 MiB/s [2024-12-06T13:22:01.628Z] 6589.82 IOPS, 25.74 MiB/s [2024-12-06T13:22:01.628Z] 6631.18 IOPS, 25.90 MiB/s [2024-12-06T13:22:01.628Z] 6664.59 IOPS, 26.03 MiB/s [2024-12-06T13:22:01.628Z] 6699.81 IOPS, 26.17 MiB/s [2024-12-06T13:22:01.628Z] 6732.84 IOPS, 26.30 MiB/s [2024-12-06T13:22:01.628Z] 6764.66 IOPS, 26.42 MiB/s [2024-12-06T13:22:01.628Z] 6791.73 IOPS, 26.53 MiB/s [2024-12-06T13:22:01.628Z] 6818.43 IOPS, 26.63 MiB/s [2024-12-06T13:22:01.628Z]
00:27:55.100 [2024-12-06 13:21:50.950264] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
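errno 111 in the posix.c message above is ECONNREFUSED: while the target side of the second path is down, the host's connect() to 10.0.0.3:4421 is refused, that reset attempt fails, and bdev_nvme immediately schedules another reset, which succeeds about ten seconds later. A rough sketch of such a connect-and-retry loop (a hand-written illustration, not SPDK's actual reset path; the address and port come from the log, the retry interval is arbitrary):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Keep retrying the TCP connection to the NVMe-oF target until it
     * accepts; ECONNREFUSED (errno 111) just means the listener is not
     * back yet on this path. */
    static int connect_with_retry(const char *ip, uint16_t port, int max_tries)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };
        inet_pton(AF_INET, ip, &addr.sin_addr);

        for (int i = 0; i < max_tries; i++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
                return fd;                  /* path is reachable again */
            int err = errno;
            close(fd);
            if (err != ECONNREFUSED && err != EHOSTUNREACH && err != ETIMEDOUT)
                return -1;                  /* not a transient failover error */
            fprintf(stderr, "connect() failed, errno = %d; retrying\n", err);
            sleep(1);
        }
        return -1;
    }

    int main(void)
    {
        int fd = connect_with_retry("10.0.0.3", 4421, 30);
        if (fd >= 0)
            close(fd);
        return fd >= 0 ? 0 : 1;
    }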
00:27:55.100 6846.68 IOPS, 26.74 MiB/s [2024-12-06T13:22:01.628Z] 6874.04 IOPS, 26.85 MiB/s [2024-12-06T13:22:01.628Z] 6860.86 IOPS, 26.80 MiB/s [2024-12-06T13:22:01.628Z] 6849.48 IOPS, 26.76 MiB/s [2024-12-06T13:22:01.628Z] 6779.04 IOPS, 26.48 MiB/s [2024-12-06T13:22:01.628Z] 6756.23 IOPS, 26.39 MiB/s [2024-12-06T13:22:01.628Z] 6741.30 IOPS, 26.33 MiB/s [2024-12-06T13:22:01.628Z] 6751.70 IOPS, 26.37 MiB/s [2024-12-06T13:22:01.628Z] 6760.31 IOPS, 26.41 MiB/s [2024-12-06T13:22:01.628Z] 6765.77 IOPS, 26.43 MiB/s [2024-12-06T13:22:01.628Z] Received shutdown signal, test time was about 56.585264 seconds
00:27:55.100
00:27:55.100 Latency(us)
00:27:55.100 [2024-12-06T13:22:01.628Z] Device Information                                                        : runtime(s)       IOPS      MiB/s    Fail/s     TO/s     Average        min        max
00:27:55.100 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:55.100 Verification LBA range: start 0x0 length 0x4000
00:27:55.100 Nvme0n1                                                                   :      56.58    6762.43      26.42      0.00     0.00    18895.71     476.63 7046430.72
00:27:55.100 [2024-12-06T13:22:01.628Z] ===================================================================================================================
00:27:55.100 [2024-12-06T13:22:01.628Z] Total                                                                     :               6762.43      26.42      0.00     0.00    18895.71     476.63 7046430.72
00:27:55.100 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:55.667 rmmod nvme_tcp
00:27:55.667 rmmod nvme_fabrics
00:27:55.667 rmmod nvme_keyring
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 95814 ']'
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 95814
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 95814 ']'
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 95814
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:55.667 13:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95814
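The MiB/s column in the summary table above follows directly from the IOPS column and the 4096-byte I/O size given on the Job line: 6762.43 IOPS x 4096 bytes is about 27.7 MB/s, which is 26.42 MiB/s. A one-line check of that conversion:

    #include <stdio.h>

    int main(void)
    {
        const double iops = 6762.43;      /* from the summary table */
        const double io_size = 4096.0;    /* bytes per I/O, from the job line */
        /* 1 MiB = 1024 * 1024 bytes; prints 26.42 */
        printf("%.2f MiB/s\n", iops * io_size / (1024.0 * 1024.0));
        return 0;
    }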
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95814 00:27:55.667 killing process with pid 95814 00:27:55.667 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:55.667 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:55.667 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95814' 00:27:55.667 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 95814 00:27:55.667 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 95814 00:27:55.667 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:55.667 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:55.667 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:55.667 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:27:55.667 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:27:55.667 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:55.667 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:27:55.667 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:55.667 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:55.667 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.925 13:22:02 
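Annotation: the teardown traced above runs in strict reverse order of setup: detach the four bridge-side veth ends from nvmf_br, bring them down, delete the bridge, delete the host-side veth pairs, then delete the target-side interfaces inside the namespace before the namespace itself goes away. A condensed paraphrase of those steps, not the verbatim functions (the final 'ip netns delete' is an assumption about what _remove_spdk_ns does, since its body is suppressed by xtrace_disable_per_cmd):

    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster    # leave the bridge first
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if       # deleting one end removes the veth pair
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk  # assumed equivalent of _remove_spdk_ns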
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:27:55.925 00:27:55.925 real 1m2.190s 00:27:55.925 user 2m57.975s 00:27:55.925 sys 0m13.377s 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:55.925 ************************************ 00:27:55.925 END TEST nvmf_host_multipath 00:27:55.925 ************************************ 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.925 ************************************ 00:27:55.925 START TEST nvmf_timeout 00:27:55.925 ************************************ 00:27:55.925 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:27:56.185 * Looking for test storage... 00:27:56.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:56.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.185 --rc genhtml_branch_coverage=1 00:27:56.185 --rc genhtml_function_coverage=1 00:27:56.185 --rc genhtml_legend=1 00:27:56.185 --rc geninfo_all_blocks=1 00:27:56.185 --rc geninfo_unexecuted_blocks=1 00:27:56.185 00:27:56.185 ' 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:56.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.185 --rc genhtml_branch_coverage=1 00:27:56.185 --rc genhtml_function_coverage=1 00:27:56.185 --rc genhtml_legend=1 00:27:56.185 --rc geninfo_all_blocks=1 00:27:56.185 --rc geninfo_unexecuted_blocks=1 00:27:56.185 00:27:56.185 ' 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:56.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.185 --rc genhtml_branch_coverage=1 00:27:56.185 --rc genhtml_function_coverage=1 00:27:56.185 --rc genhtml_legend=1 00:27:56.185 --rc geninfo_all_blocks=1 00:27:56.185 --rc geninfo_unexecuted_blocks=1 00:27:56.185 00:27:56.185 ' 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:56.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.185 --rc genhtml_branch_coverage=1 00:27:56.185 --rc genhtml_function_coverage=1 00:27:56.185 --rc genhtml_legend=1 00:27:56.185 --rc geninfo_all_blocks=1 00:27:56.185 --rc geninfo_unexecuted_blocks=1 00:27:56.185 00:27:56.185 ' 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.185 
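Annotation: the block just traced is scripts/common.sh deciding which lcov option spelling to use: lt 1.15 2 splits both version strings on dots and compares them field by field, so the legacy --rc lcov_* options are kept for lcov 1.x. An equivalent check can be written with GNU sort's version ordering; this is an alternative sketch, not what common.sh does internally:

    # Requires GNU coreutils: sort -V (version order) and sort -C (check only).
    lt() { [ "$1" = "$2" ] && return 1; printf '%s\n%s\n' "$1" "$2" | sort -V -C; }
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        echo "lcov 1.x detected: keep the --rc lcov_* option spelling"
    fi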
13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:56.185 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:56.185 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:56.186 13:22:02 
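Annotation: one genuine wart is captured above: "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected". The traced test is '[' '' -eq 1 ']': the variable behind it is unset, so test(1) is asked to compare an empty string numerically and complains; the run survives only because the failed test takes the false branch. The usual defensive spelling defaults the expansion first. The variable name below is a placeholder, since the trace does not show which one line 33 reads:

    # SOME_NVMF_TEST_FLAG is hypothetical; nvmf/common.sh line 33 tests
    # something that is unset in this job's configuration.
    if [ "${SOME_NVMF_TEST_FLAG:-0}" -eq 1 ]; then
        : # whatever common.sh appends to NVMF_APP in this branch
    fi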
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:56.186 Cannot find device "nvmf_init_br" 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:56.186 Cannot find device "nvmf_init_br2" 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:27:56.186 Cannot find device "nvmf_tgt_br" 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:56.186 Cannot find device "nvmf_tgt_br2" 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:56.186 Cannot find device "nvmf_init_br" 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:56.186 Cannot find device "nvmf_init_br2" 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:56.186 Cannot find device "nvmf_tgt_br" 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:56.186 Cannot find device "nvmf_tgt_br2" 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:56.186 Cannot find device "nvmf_br" 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:56.186 Cannot find device "nvmf_init_if" 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:56.186 Cannot find device "nvmf_init_if2" 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:56.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:27:56.186 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:56.444 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
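Annotation: at this point the virtual test network is complete: two initiator-side veth pairs (10.0.0.1 and 10.0.0.2), two target-side pairs moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all four bridge-side ends enslaved to nvmf_br, and iptables ACCEPT rules so a distro firewall cannot drop port 4420. The "Cannot find device" noise earlier is expected: the fini helpers run unconditionally before init, so on a clean machine every delete fails harmlessly. A condensed replay of the traced bring-up, same names and addresses, minus error handling:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT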
00:27:56.444 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:56.703 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:56.703 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:27:56.703 00:27:56.703 --- 10.0.0.3 ping statistics --- 00:27:56.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.703 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:27:56.703 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:56.703 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:56.703 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:27:56.703 00:27:56.703 --- 10.0.0.4 ping statistics --- 00:27:56.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.703 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:27:56.703 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:56.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:56.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:27:56.703 00:27:56.703 --- 10.0.0.1 ping statistics --- 00:27:56.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.703 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:27:56.703 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:56.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:56.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:27:56.703 00:27:56.703 --- 10.0.0.2 ping statistics --- 00:27:56.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.703 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:27:56.703 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:56.703 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:27:56.703 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:56.703 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:56.703 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:56.703 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:56.703 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:56.703 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:56.703 13:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:56.703 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:27:56.703 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:56.703 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:56.703 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:56.703 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=97213 00:27:56.703 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:56.703 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 97213 00:27:56.703 13:22:03 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97213 ']' 00:27:56.703 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.703 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:56.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.703 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.703 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:56.703 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:56.703 [2024-12-06 13:22:03.072426] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:27:56.703 [2024-12-06 13:22:03.072510] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.703 [2024-12-06 13:22:03.216391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:56.962 [2024-12-06 13:22:03.263111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.963 [2024-12-06 13:22:03.263195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.963 [2024-12-06 13:22:03.263216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:56.963 [2024-12-06 13:22:03.263229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:56.963 [2024-12-06 13:22:03.263240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
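Annotation: nvmfappstart has launched the target inside the namespace and waitforlisten is now polling until the RPC socket answers. A minimal sketch of that wait loop, under the assumption that polling any cheap RPC is an adequate liveness probe (the real helper also enforces the max_retries=100 budget traced above):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # rpc.py talks to /var/tmp/spdk.sock by default, the socket waited on here.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods \
            >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died
        sleep 0.1
    done
    echo "nvmf_tgt (pid $nvmfpid) is answering RPCs"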
00:27:56.963 [2024-12-06 13:22:03.264149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.963 [2024-12-06 13:22:03.264170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.963 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:56.963 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:27:56.963 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:56.963 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:56.963 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:56.963 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:56.963 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:56.963 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:57.529 [2024-12-06 13:22:03.784290] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:57.529 13:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:57.787 Malloc0 00:27:57.787 13:22:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:58.045 13:22:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:58.304 13:22:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:58.562 [2024-12-06 13:22:05.068025] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:58.821 13:22:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=97296 00:27:58.821 13:22:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:27:58.821 13:22:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 97296 /var/tmp/bdevperf.sock 00:27:58.821 13:22:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97296 ']' 00:27:58.821 13:22:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:58.821 13:22:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:58.821 13:22:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:58.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
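Annotation: the five RPCs just traced are the whole target-side provisioning for this test. Gathered into one block for readability, with arguments identical to the trace: the TCP transport (with the suite's standard -o -u 8192 options), a 64 MB malloc bdev with 512-byte blocks, a subsystem that allows any host, its namespace, and the 10.0.0.3:4420 listener that bdevperf will dial:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420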
00:27:58.821 13:22:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.821 13:22:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:58.821 [2024-12-06 13:22:05.135788] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:27:58.821 [2024-12-06 13:22:05.135892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97296 ] 00:27:58.821 [2024-12-06 13:22:05.311855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.079 [2024-12-06 13:22:05.353067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:59.079 13:22:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:59.079 13:22:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:27:59.079 13:22:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:59.337 13:22:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:27:59.903 NVMe0n1 00:27:59.903 13:22:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=97330 00:27:59.903 13:22:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:59.903 13:22:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:27:59.903 Running I/O for 10 seconds... 
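Annotation: bdevperf runs as its own SPDK app (-z keeps it idle, waiting for RPCs on /var/tmp/bdevperf.sock), so every host-side configuration call targets that socket rather than the target's. The two knobs on the attach shape the retry behavior this test exercises: --reconnect-delay-sec 2 spaces the reconnect attempts, and --ctrlr-loss-timeout-sec 5 bounds how long bdev_nvme keeps trying before it gives the controller up. The host-side sequence, as traced (-r -1 presumably makes I/O retries unbounded; the trace does not expand it):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    $rpc -s $sock bdev_nvme_set_options -r -1
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s $sock perform_tests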
00:28:00.842 13:22:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:01.106 8357.00 IOPS, 32.64 MiB/s [2024-12-06T13:22:07.634Z] [2024-12-06 13:22:07.496962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.106 [2024-12-06 13:22:07.497022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.106 [2024-12-06 13:22:07.497033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.106 [2024-12-06 13:22:07.497041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.106 [2024-12-06 13:22:07.497049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.106 [2024-12-06 13:22:07.497058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.106 [2024-12-06 13:22:07.497066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.106 [2024-12-06 13:22:07.497077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.106 [2024-12-06 13:22:07.497091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.106 [2024-12-06 13:22:07.497104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.106 [2024-12-06 13:22:07.497113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.106 [2024-12-06 13:22:07.497121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.106 [2024-12-06 13:22:07.497129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.106 [2024-12-06 13:22:07.497137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.106 [2024-12-06 13:22:07.497145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.106 [2024-12-06 13:22:07.497152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.106 [2024-12-06 13:22:07.497160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 
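Annotation: from here to the end of the capture the log repeats one event. The listener was just removed mid-run, the target deleted the submission queue, and every queued READ/WRITE is completed back with status (00/08). Reading that pair: status code type 0x0 is the generic command status set and status code 0x08 is "Command Aborted due to SQ Deletion", exactly what spdk_nvme_print_completion spells out in each line; bdev_nvme then requeues the aborted I/Os for retry. A tiny illustrative decoder for the generic-status values that show up in these prints:

    decode_nvme_status() {  # usage: decode_nvme_status <sct> <sc>, hex pairs
        case "$1/$2" in
            00/00) echo "SUCCESSFUL COMPLETION" ;;
            00/04) echo "DATA TRANSFER ERROR" ;;
            00/08) echo "ABORTED - SQ DELETION" ;;  # the status flooding this log
            *)     echo "see the NVMe base spec generic command status table" ;;
        esac
    }
    decode_nvme_status 00 08   # -> ABORTED - SQ DELETION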
00:28:01.107 [2024-12-06 13:22:07.497192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0550 is same with the state(6) to be set 00:28:01.107 [2024-12-06 13:22:07.497720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.107 [2024-12-06 13:22:07.497764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.107 [2024-12-06 13:22:07.497789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.107 [2024-12-06 13:22:07.497801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.107 [2024-12-06 13:22:07.497813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.107 [2024-12-06 13:22:07.497823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.107 [2024-12-06 13:22:07.497835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.107 [2024-12-06 13:22:07.497848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.107 [2024-12-06 13:22:07.497868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.107 [2024-12-06 13:22:07.497884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.107 [2024-12-06 13:22:07.497897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.107 [2024-12-06 13:22:07.497906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.107 [2024-12-06 13:22:07.497918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.107 [2024-12-06 13:22:07.497928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.107 [2024-12-06 13:22:07.497961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.107 [2024-12-06 13:22:07.497982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.107 [2024-12-06 13:22:07.498000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.107 [2024-12-06 13:22:07.498011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.107 [2024-12-06 13:22:07.498023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.107 [2024-12-06 13:22:07.498032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.107 [2024-12-06 13:22:07.498045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.107 [2024-12-06 13:22:07.498054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.107 [2024-12-06 13:22:07.498066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.107 [2024-12-06 13:22:07.498075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.107 [2024-12-06 13:22:07.498093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.107 [2024-12-06 13:22:07.498111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs elided: the remaining queued I/O on qid:1 -- READ lba 74544-74616, WRITE lba 75008-75456, READ lba 74624-74992, len:8 each -- is printed and completed with ABORTED - SQ DELETION (00/08) ...]
00:28:01.111 [2024-12-06 13:22:07.501425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdae2c0 is same with the state(6) to be set
00:28:01.111 [2024-12-06 13:22:07.501438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:01.111 [2024-12-06 13:22:07.501447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:01.111 [2024-12-06 13:22:07.501456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75000 len:8 PRP1 0x0 PRP2 0x0
00:28:01.111 [2024-12-06 13:22:07.501465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.111 [2024-12-06 13:22:07.501807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:01.111 [2024-12-06 13:22:07.501912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd42f30 (9): Bad file descriptor
00:28:01.111 [2024-12-06 13:22:07.502074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.111 [2024-12-06 13:22:07.502114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42f30 with addr=10.0.0.3, port=4420
00:28:01.111 [2024-12-06 13:22:07.502128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd42f30 is same with the state(6) to be set
00:28:01.111 [2024-12-06 13:22:07.502150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd42f30 (9): Bad file descriptor
00:28:01.111 [2024-12-06 13:22:07.502167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:28:01.111 [2024-12-06 13:22:07.502177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:28:01.111 [2024-12-06 13:22:07.502188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:28:01.111 [2024-12-06 13:22:07.502200] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:28:01.111 [2024-12-06 13:22:07.502211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:01.111 13:22:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
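The wall of NOTICE pairs above is the expected signature of a TCP qpair teardown: once the submission queue is deleted, every queued command is completed with ABORTED - SQ DELETION (status 00/08), one command/completion pair per request. A quick way to quantify such a flood when reading a saved console log (the build.log filename is illustrative, not from this run):

```bash
# Count how many requests were completed with the SQ-deletion abort status.
grep -c 'ABORTED - SQ DELETION' build.log

# Print the lowest and highest LBA that appears in the aborted commands,
# to see the range of in-flight I/O at the moment the qpair went away.
grep -o 'lba:[0-9]*' build.log | cut -d: -f2 | sort -n | sed -n '1p;$p'
```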
00:28:02.982 4652.50 IOPS, 18.17 MiB/s [2024-12-06T13:22:09.510Z] 3101.67 IOPS, 12.12 MiB/s [2024-12-06T13:22:09.510Z]
[2024-12-06 13:22:09.502395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.982 [2024-12-06 13:22:09.502474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42f30 with addr=10.0.0.3, port=4420
00:28:02.983 [2024-12-06 13:22:09.502493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd42f30 is same with the state(6) to be set
00:28:02.983 [2024-12-06 13:22:09.502523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd42f30 (9): Bad file descriptor
00:28:02.983 [2024-12-06 13:22:09.502543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:28:02.983 [2024-12-06 13:22:09.502554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:28:02.983 [2024-12-06 13:22:09.502566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:28:02.983 [2024-12-06 13:22:09.502578] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:28:02.983 [2024-12-06 13:22:09.502590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:03.241 13:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:28:03.241 13:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:03.241 13:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:28:03.498 13:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:28:03.498 13:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:28:03.498 13:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:28:03.498 13:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:28:03.756 13:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:28:03.756 13:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
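The get_controller and get_bdev helpers traced above are small wrappers in host/timeout.sh that poll bdevperf's RPC socket and extract names with jq. A minimal standalone sketch of the same liveness check, assuming the socket path and the NVMe0/NVMe0n1 naming used in this run:

```bash
#!/usr/bin/env bash
# Sketch: check whether the NVMe-oF controller and its bdev are still
# registered with bdevperf while the target is unreachable.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')

# While reconnects are still being retried, both names are present (as at
# 13:22:09 above); once the ctrlr-loss timeout expires, both queries return
# empty lists (as at 13:22:15 below).
if [[ $ctrlr == NVMe0 && $bdev == NVMe0n1 ]]; then
    echo "controller and bdev still registered"
else
    echo "controller gone"
fi
```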
00:28:04.947 2326.25 IOPS, 9.09 MiB/s [2024-12-06T13:22:11.732Z] 1861.00 IOPS, 7.27 MiB/s [2024-12-06T13:22:11.732Z]
[2024-12-06 13:22:11.502829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.204 [2024-12-06 13:22:11.502913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42f30 with addr=10.0.0.3, port=4420
00:28:05.204 [2024-12-06 13:22:11.502932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd42f30 is same with the state(6) to be set
00:28:05.204 [2024-12-06 13:22:11.502976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd42f30 (9): Bad file descriptor
00:28:05.204 [2024-12-06 13:22:11.503000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:28:05.204 [2024-12-06 13:22:11.503011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:28:05.204 [2024-12-06 13:22:11.503022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:28:05.204 [2024-12-06 13:22:11.503034] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:28:05.204 [2024-12-06 13:22:11.503047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:07.073 1550.83 IOPS, 6.06 MiB/s [2024-12-06T13:22:13.601Z] 1329.29 IOPS, 5.19 MiB/s [2024-12-06T13:22:13.601Z]
[2024-12-06 13:22:13.503173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:28:07.073 [2024-12-06 13:22:13.503240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:28:07.073 [2024-12-06 13:22:13.503254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:28:07.073 [2024-12-06 13:22:13.503265] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:28:07.073 [2024-12-06 13:22:13.503277] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:28:08.005 1163.12 IOPS, 4.54 MiB/s
00:28:08.005 Latency(us)
00:28:08.005 [2024-12-06T13:22:14.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:08.005 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:08.005 Verification LBA range: start 0x0 length 0x4000
00:28:08.005 NVMe0n1 : 8.12 1146.06 4.48 15.77 0.00 110002.06 2219.29 7015926.69
00:28:08.005 [2024-12-06T13:22:14.533Z] ===================================================================================================================
00:28:08.005 [2024-12-06T13:22:14.533Z] Total : 1146.06 4.48 15.77 0.00 110002.06 2219.29 7015926.69
00:28:08.005 {
00:28:08.005 "results": [
00:28:08.005 {
00:28:08.005 "job": "NVMe0n1",
00:28:08.005 "core_mask": "0x4",
00:28:08.005 "workload": "verify",
00:28:08.005 "status": "finished",
00:28:08.005 "verify_range": {
00:28:08.005 "start": 0,
00:28:08.005 "length": 16384
00:28:08.005 },
00:28:08.005 "queue_depth": 128,
00:28:08.005 "io_size": 4096,
00:28:08.005 "runtime": 8.119131,
00:28:08.005 "iops": 1146.0586114449934,
00:28:08.005 "mibps": 4.4767914509570055,
00:28:08.005 "io_failed": 128,
00:28:08.005 "io_timeout": 0,
00:28:08.005 "avg_latency_us": 110002.05568150498,
00:28:08.005 "min_latency_us": 2219.287272727273,
00:28:08.005 "max_latency_us": 7015926.69090909
00:28:08.005 }
00:28:08.005 ],
00:28:08.005 "core_count": 1
00:28:08.005 }
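The JSON summary above mirrors the human-readable latency table. A small jq sketch for pulling the headline numbers back out of it, assuming the block has been saved to a file (results.json is an illustrative name, not produced by this run); the field names are taken directly from the "results" object printed above:

```bash
# Extract job name, achieved IOPS, failed I/O count, and average latency
# from a bdevperf JSON results block.
jq -r '.results[] |
       "\(.job): \(.iops) IOPS, \(.io_failed) failed I/Os, avg \(.avg_latency_us) us"' \
   results.json
```

For this run that yields one line for NVMe0n1: roughly 1146 IOPS with 128 failed I/Os, matching the Total row of the table.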
00:28:08.939 13:22:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:28:08.939 13:22:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:08.939 13:22:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:28:09.228 13:22:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:28:09.228 13:22:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:28:09.228 13:22:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:28:09.228 13:22:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:28:09.534 13:22:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:28:09.534 13:22:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 97330
00:28:09.534 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 97296
00:28:09.534 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97296 ']'
00:28:09.534 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97296
00:28:09.534 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:28:09.534 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:09.534 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97296
00:28:09.534 killing process with pid 97296
00:28:09.534 Received shutdown signal, test time was about 9.651562 seconds
00:28:09.534 
00:28:09.534 Latency(us)
00:28:09.534 [2024-12-06T13:22:16.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:09.534 [2024-12-06T13:22:16.062Z] ===================================================================================================================
00:28:09.534 [2024-12-06T13:22:16.062Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:09.534 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:28:09.534 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:28:09.534 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97296'
00:28:09.534 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97296
00:28:09.534 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97296
00:28:09.792 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:28:10.050 [2024-12-06 13:22:16.564554] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
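Adding and removing the listener is the lever this timeout test uses to simulate target loss: the add above brings the target back for the next iteration, and the matching remove (traced a little further down, once I/O is running) yanks it away. The pair of RPCs, restated with the flag meanings spelled out; values are taken verbatim from this run:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# -t transport type, -a target address, -s service (port).
"$rpc" nvmf_subsystem_add_listener    "$nqn" -t tcp -a 10.0.0.3 -s 4420  # host can connect again
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420  # in-flight I/O gets aborted
```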
00:28:10.308 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=97496
00:28:10.308 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 97496 /var/tmp/bdevperf.sock
00:28:10.308 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:28:10.308 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97496 ']'
00:28:10.308 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:10.308 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:10.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:28:10.308 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:10.308 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:10.308 13:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:28:10.308 [2024-12-06 13:22:16.665745] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization...
00:28:10.308 [2024-12-06 13:22:16.665839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97496 ]
00:28:10.566 [2024-12-06 13:22:16.842588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:10.566 [2024-12-06 13:22:16.891826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:10.566 13:22:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:10.566 13:22:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:28:10.566 13:22:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:28:11.134 13:22:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:28:11.701 NVMe0n1
00:28:11.701 13:22:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=97530
00:28:11.701 13:22:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:11.701 13:22:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:28:11.701 Running I/O for 10 seconds...
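The bdev_nvme_attach_controller call above sets the reconnect policy that drives the rest of the test. Our reading of the three knobs, which should be checked against the SPDK bdev_nvme documentation for your version: reconnects are attempted every --reconnect-delay-sec seconds; queued I/O starts failing fast once --fast-io-fail-timeout-sec has elapsed without a connection; and after --ctrlr-loss-timeout-sec with no successful reconnect, the controller and its bdev are deleted outright, which is why the name queries above came back empty after the previous iteration. Restated as a reusable command, verbatim from this run:

```bash
# Attach the remote NVMe-oF controller with an aggressive reconnect policy:
# retry every 1 s, fail I/O fast after 2 s, give up on the controller after 5 s.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
```

On success the RPC prints the new bdev name (NVMe0n1 here), which the test then hands to bdevperf.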
13:22:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:28:12.896 4738.00 IOPS, 18.51 MiB/s [2024-12-06T13:22:19.424Z]
[2024-12-06 13:22:19.270986] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1418850 is same with the state(6) to be set
[... the same nvmf_tcp_qpair_set_recv_state *ERROR* line repeats for tqpair=0x1418850 through 13:22:19.271175 ...]
00:28:12.896 [2024-12-06 13:22:19.273213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.896 [2024-12-06 13:22:19.273275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command/completion pairs elided for lba 43216-43536 (len:8 each); every queued WRITE on qid:1 completes with ABORTED - SQ DELETION (00/08) ...]
[2024-12-06 13:22:19.274852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43544 len:8 SGL DATA BLOCK
OFFSET 0x0 len:0x1000 00:28:12.897 [2024-12-06 13:22:19.274867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.897 [2024-12-06 13:22:19.274888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.897 [2024-12-06 13:22:19.274903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.274921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:43560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.274938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.274975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:43568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.274994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 
13:22:19.275264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:43672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:43704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275648] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.275973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.275994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.276011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.276031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.276046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.276068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.276084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.276106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:43800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.276123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.276144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.276160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.276182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.276197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.276225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.276243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.276267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.276283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.276304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:43840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.276321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.276341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.898 [2024-12-06 13:22:19.276355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.276409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.898 [2024-12-06 13:22:19.276427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43856 len:8 PRP1 0x0 PRP2 0x0 00:28:12.898 [2024-12-06 13:22:19.276443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.898 [2024-12-06 13:22:19.276467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.898 [2024-12-06 13:22:19.276480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:28:12.898 [2024-12-06 13:22:19.276493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43080 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.276507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.276524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.276537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.276550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43088 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.276565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.276580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.276592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.276605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43096 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.276621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.276638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.276650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.276664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43104 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.276678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.276693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.276706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.276719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43112 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.276733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.276749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.276764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.276778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43120 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.276792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.276810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.276821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 
13:22:19.276842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43128 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.276860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.276877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.276888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.276901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43864 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.276915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.276929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.276959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.276975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43872 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.276990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.277009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.277019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.277032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43880 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.277046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.277062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.277073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.277085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43888 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.277099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.277115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.277127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.277140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43896 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.277154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.277170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.277181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.277193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43904 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.277208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.277231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.277246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.277260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43912 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.277275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.277292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.277304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.277322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43920 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.277338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.277360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.277372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.277384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43928 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.277399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.277414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.277426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.277442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43936 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.277456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.277472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.277484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.277496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43944 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.277511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.277528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.277539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.277553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:43952 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.277566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.277582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.277595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.277608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43960 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.277622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.277639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.277651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.277665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43968 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.277680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.277697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.277713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.277727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43976 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.277741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.277756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.277769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.277784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43984 len:8 PRP1 0x0 PRP2 0x0 00:28:12.899 [2024-12-06 13:22:19.277800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.899 [2024-12-06 13:22:19.277815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.899 [2024-12-06 13:22:19.277827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.899 [2024-12-06 13:22:19.277841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43992 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.277855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.277872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.277883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.277895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44000 len:8 PRP1 0x0 PRP2 0x0 
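Every completion in the run above carries the same status, "ABORTED - SQ DELETION (00/08)": the deleted submission queue forces each outstanding command to finish with status code type 0x0 (generic command status) and status code 0x08 (Command Aborted due to SQ Deletion), and dnr:0 means the Do Not Retry bit is clear, so the host is allowed to requeue the I/O once the controller comes back. A minimal C sketch of decoding that status pair (a hypothetical helper written for illustration, not SPDK's spdk_nvme_print_completion):

    /* Hypothetical helper, shown only to decode the "(00/08)" pair in the
     * log above; SPDK's own printer is spdk_nvme_print_completion(). */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct nvme_status {
        uint8_t sct; /* status code type, CQE dword 3 bits 27:25 */
        uint8_t sc;  /* status code,      CQE dword 3 bits 24:17 */
        bool    dnr; /* Do Not Retry,     CQE dword 3 bit  31    */
    };

    /* SCT 0x0 is the generic command status set; SC 0x08 in that set is
     * "Command Aborted due to SQ Deletion" -- the status every entry
     * above reports while the submission queue is being torn down. */
    static bool aborted_by_sq_deletion(const struct nvme_status *st)
    {
        return st->sct == 0x0 && st->sc == 0x08;
    }

    int main(void)
    {
        struct nvme_status st = { .sct = 0x0, .sc = 0x08, .dnr = false };
        if (aborted_by_sq_deletion(&st) && !st.dnr)
            printf("aborted by SQ deletion; dnr clear, host may retry\n");
        return 0;
    }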
00:28:12.900 [2024-12-06 13:22:19.277909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.277924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.277937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.277968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44008 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.277986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.278003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.278014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.278027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44016 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.278042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.278063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.278074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.278087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44024 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.278102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.278118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.278130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.278145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44032 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.278160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.278177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.278192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.278206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44040 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.278221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.278237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.278249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.278264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44048 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.278280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.278297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.278308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.278322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44056 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.278337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.278353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.278365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.278377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44064 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.278391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.278407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.278418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.278431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44072 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.278446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.278462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.278473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.278485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44080 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.278499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.278516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.278527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.278540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44088 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.278555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.278572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.278584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.278597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44096 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.278612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.278628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.278641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.278655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43136 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.278670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.278685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.278697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.278712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43144 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.278728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.278745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.278757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.278771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43152 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.278785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.278802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.278814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.278826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43160 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.278841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.278856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.278868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.278881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43168 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.278896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.278911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.278922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.278934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43176 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.278966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
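Two flavors of abort are interleaved above: commands that had already reached the transport are printed with their SGL DATA BLOCK mapping, while requests still sitting in the host's software queue show PRP1 0x0 PRP2 0x0 (no data pointer was ever programmed) and are flagged "Command completed manually" by nvme_qpair_abort_queued_reqs, i.e. they are popped off the queue and completed in place without ever touching the wire. A rough illustration of that drain loop over a hypothetical singly linked request queue (not SPDK's data structure):

    /* Illustrative drain loop -- requests that never reached the wire
     * are popped and completed in place with the same
     * ABORTED - SQ DELETION status that the in-flight completions carry. */
    #include <stdio.h>

    enum { ABORT_SCT = 0x0, ABORT_SC = 0x08 };

    struct req {
        unsigned lba;
        struct req *next;
        void (*complete)(struct req *r, int sct, int sc);
    };

    static void on_complete(struct req *r, int sct, int sc)
    {
        printf("lba %u completed manually with status (%02x/%02x)\n",
               r->lba, sct, sc);
    }

    static void abort_queued_reqs(struct req **queue)
    {
        while (*queue) {
            struct req *r = *queue;
            *queue = r->next;                  /* pop before completing */
            r->complete(r, ABORT_SCT, ABORT_SC);
        }
    }

    int main(void)
    {
        struct req b = { .lba = 43144, .next = NULL, .complete = on_complete };
        struct req a = { .lba = 43136, .next = &b,   .complete = on_complete };
        struct req *queue = &a;
        abort_queued_reqs(&queue);             /* mirrors the log's drain */
        return 0;
    }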
00:28:12.900 [2024-12-06 13:22:19.278989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.279001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.279014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43184 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.279030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.279048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.279060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.279073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43192 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.279091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.279109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.900 [2024-12-06 13:22:19.279123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.900 [2024-12-06 13:22:19.279136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43200 len:8 PRP1 0x0 PRP2 0x0 00:28:12.900 [2024-12-06 13:22:19.279151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.279421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.900 [2024-12-06 13:22:19.279447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.900 [2024-12-06 13:22:19.279470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.901 [2024-12-06 13:22:19.279485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.901 [2024-12-06 13:22:19.279505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.901 [2024-12-06 13:22:19.279521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.901 [2024-12-06 13:22:19.279545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.901 [2024-12-06 13:22:19.279560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.901 [2024-12-06 13:22:19.279575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53bf30 is same with the state(6) to be set 00:28:12.901 [2024-12-06 13:22:19.280144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.901 [2024-12-06 13:22:19.280183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x53bf30 (9): Bad file descriptor 00:28:12.901 [2024-12-06 13:22:19.280336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.901 [2024-12-06 13:22:19.280367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x53bf30 with addr=10.0.0.3, port=4420 00:28:12.901 [2024-12-06 13:22:19.280385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53bf30 is same with the state(6) to be set 00:28:12.901 [2024-12-06 13:22:19.280415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x53bf30 (9): Bad file descriptor 00:28:12.901 [2024-12-06 13:22:19.280445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.901 [2024-12-06 13:22:19.280460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.901 [2024-12-06 13:22:19.280477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.901 [2024-12-06 13:22:19.280494] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:12.901 [2024-12-06 13:22:19.280512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.901 13:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:28:13.837 2692.50 IOPS, 10.52 MiB/s [2024-12-06T13:22:20.365Z] [2024-12-06 13:22:20.280703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.837 [2024-12-06 13:22:20.280787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x53bf30 with addr=10.0.0.3, port=4420 00:28:13.837 [2024-12-06 13:22:20.280804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53bf30 is same with the state(6) to be set 00:28:13.837 [2024-12-06 13:22:20.280833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x53bf30 (9): Bad file descriptor 00:28:13.837 [2024-12-06 13:22:20.280854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.837 [2024-12-06 13:22:20.280866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.837 [2024-12-06 13:22:20.280878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.837 [2024-12-06 13:22:20.280889] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
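The reconnect attempts above fail at the socket layer: with the listener removed, connect() to 10.0.0.3:4420 returns errno 111, which on Linux is ECONNREFUSED, so posix_sock_create reports the error, the controller is left in an error state, and bdev_nvme retries roughly once per second while the test script sleeps. A self-contained sketch that reproduces just the socket-level failure (standalone code, not SPDK's posix.c):

    /* Standalone reproduction of the socket-level failure only; the
     * code path in the log is posix_sock_create() in SPDK's posix
     * sock module. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(4420),  /* NVMe/TCP port from the log */
        };

        if (fd < 0)
            return 1;
        inet_pton(AF_INET, "10.0.0.3", &addr.sin_addr);

        /* With no listener on the target, this fails and errno is 111,
         * i.e. ECONNREFUSED on Linux -- the exact pair the log prints. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));

        close(fd);
        return 0;
    }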
00:28:13.837 [2024-12-06 13:22:20.280900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.837 13:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:28:14.403 [2024-12-06 13:22:20.672127] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:28:14.403 13:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 97530
00:28:14.920 1795.00 IOPS, 7.01 MiB/s [2024-12-06T13:22:21.448Z] [2024-12-06 13:22:21.297460] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:28:16.833 1346.25 IOPS, 5.26 MiB/s [2024-12-06T13:22:24.293Z] 2382.00 IOPS, 9.30 MiB/s [2024-12-06T13:22:25.219Z] 3276.33 IOPS, 12.80 MiB/s [2024-12-06T13:22:26.151Z] 3967.86 IOPS, 15.50 MiB/s [2024-12-06T13:22:27.522Z] 4541.88 IOPS, 17.74 MiB/s [2024-12-06T13:22:28.479Z] 4973.44 IOPS, 19.43 MiB/s
00:28:21.951 Latency(us)
00:28:21.951 [2024-12-06T13:22:28.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:21.951 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:21.951 Verification LBA range: start 0x0 length 0x4000
00:28:21.951 NVMe0n1 : 10.01 5143.39 20.09 0.00 0.00 24836.42 2219.29 3050402.91
00:28:21.951 [2024-12-06T13:22:28.479Z] ===================================================================================================================
00:28:21.951 [2024-12-06T13:22:28.479Z] Total : 5143.39 20.09 0.00 0.00 24836.42 2219.29 3050402.91
00:28:21.951 {
00:28:21.951   "results": [
00:28:21.951     {
00:28:21.951       "job": "NVMe0n1",
00:28:21.951       "core_mask": "0x4",
00:28:21.951       "workload": "verify",
00:28:21.951       "status": "finished",
00:28:21.951       "verify_range": {
00:28:21.951         "start": 0,
00:28:21.951         "length": 16384
00:28:21.951       },
00:28:21.951       "queue_depth": 128,
00:28:21.951       "io_size": 4096,
00:28:21.951       "runtime": 10.007011,
00:28:21.951       "iops": 5143.393966490094,
00:28:21.951       "mibps": 20.091382681601928,
00:28:21.951       "io_failed": 0,
00:28:21.951       "io_timeout": 0,
00:28:21.951       "avg_latency_us": 24836.417540738646,
00:28:21.951       "min_latency_us": 2219.287272727273,
00:28:21.951       "max_latency_us": 3050402.909090909
00:28:21.951     }
00:28:21.951   ],
00:28:21.951   "core_count": 1
00:28:21.951 }
00:28:21.952 13:22:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=97649
00:28:21.952 13:22:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:21.952 13:22:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:28:21.952 Running I/O for 10 seconds...
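The summary numbers above are internally consistent: 5143.39 IOPS of 4096-byte I/O over the 10.007 s runtime works out to 5143.39 × 4096 / 2^20 ≈ 20.09 MiB/s, the "mibps" value in the JSON, and the ~3.05 s max latency plausibly corresponds to I/O that sat queued during the window when the listener was down. A small check of that arithmetic, with the constants copied from the JSON result above:

    /* Constants copied from the JSON result above; everything else is
     * plain arithmetic, not an SPDK API. */
    #include <stdio.h>

    int main(void)
    {
        double runtime_s = 10.007011;          /* "runtime"          */
        double iops      = 5143.393966490094;  /* "iops"             */
        double io_bytes  = 4096.0;             /* "io_size" in bytes */

        double total_ios = iops * runtime_s;   /* I/Os completed in the run */
        double mibps     = iops * io_bytes / (1024.0 * 1024.0);

        printf("~%.0f I/Os, %.2f MiB/s\n", total_ios, mibps);
        /* -> ~51470 I/Os, 20.09 MiB/s, matching "mibps" in the JSON */
        return 0;
    }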
00:28:22.886 13:22:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:28:23.147 8143.00 IOPS, 31.81 MiB/s [2024-12-06T13:22:29.675Z]
[2024-12-06 13:22:29.434726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1416d40 is same with the state(6) to be set
[... the same tcp.c:1790 message for tqpair=0x1416d40 is repeated ~125 more times between 13:22:29.434781 and 13:22:29.436089; repeats elided ...]
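Runs like the tcp.c:1790 flood above, and the aborted-I/O listing that follows, dominate the console output during a controller reset. When triaging such logs offline, a filter of roughly this shape helps (a hypothetical reader-side helper, not something the SPDK harness ships; it collapses consecutive entries that differ only in their timestamps):

    import re
    import sys

    # Strip the Jenkins pipe timestamp ("00:28:23.147 ") and the SPDK entry
    # timestamp ("[2024-12-06 13:22:29.434726] ") so that otherwise-identical
    # log lines compare equal.
    NOISE = re.compile(
        r"^\d{2}:\d{2}:\d{2}\.\d{3}\s+"
        r"|\[\d{4}-\d{2}-\d{2}[ T][\d:.]+Z?\]\s*"
    )

    def collapse(lines):
        """Yield lines, replacing consecutive duplicates with a repeat count."""
        prev, count = None, 0
        for line in lines:
            key = NOISE.sub("", line)
            if key == prev:
                count += 1
                continue
            if count > 1:
                yield f"    (last message repeated {count - 1} more times)\n"
            prev, count = key, 1
            yield line
        if count > 1:
            yield f"    (last message repeated {count - 1} more times)\n"

    if __name__ == "__main__":
        sys.stdout.writelines(collapse(sys.stdin))

Fed the raw console text, this reduces the recv-state run above to its first occurrence plus a one-line count.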
00:28:23.148 [2024-12-06 13:22:29.436402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.148 [2024-12-06 13:22:29.436413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.148 [2024-12-06 13:22:29.436426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.148 [2024-12-06 13:22:29.436435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.148 [2024-12-06 13:22:29.436446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.148 [2024-12-06 13:22:29.436455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.148 [2024-12-06 13:22:29.436466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.148 [2024-12-06 13:22:29.436475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.148 [2024-12-06 13:22:29.436486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.148 [2024-12-06 13:22:29.436495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.148 [2024-12-06 13:22:29.436506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.148 [2024-12-06 13:22:29.436515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.148 [2024-12-06 13:22:29.436526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.148 [2024-12-06 13:22:29.436535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.148 [2024-12-06 13:22:29.436546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.148 [2024-12-06 13:22:29.436555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.148 [2024-12-06 13:22:29.436566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.148 [2024-12-06 13:22:29.436575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.148 [2024-12-06 13:22:29.436586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.148 [2024-12-06 13:22:29.436595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.148 [2024-12-06 13:22:29.436606] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.148 [2024-12-06 13:22:29.436615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.148 [2024-12-06 13:22:29.436626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.148 [2024-12-06 13:22:29.436635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.148 [2024-12-06 13:22:29.436646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.148 [2024-12-06 13:22:29.436655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.148 [2024-12-06 13:22:29.436666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.148 [2024-12-06 13:22:29.436675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.148 [2024-12-06 13:22:29.436686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.148 [2024-12-06 13:22:29.436694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.148 [2024-12-06 13:22:29.436706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.148 [2024-12-06 13:22:29.436716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.148 [2024-12-06 13:22:29.436727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.436736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.436747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.436757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.436768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.436776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.436787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.436796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.436807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.436816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.436827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.436836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.436847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.436856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.436868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.436877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.436888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.436897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.436907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.436916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.436927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.436936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.436963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.436974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.436985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.436994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437026] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.149 [2024-12-06 13:22:29.437525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.149 [2024-12-06 13:22:29.437536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.149 [2024-12-06 13:22:29.437545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 
13:22:29.437644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.437986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.437995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.438006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.438015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.438026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.438035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.438046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.438055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.438066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.438075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.438086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.438097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.438108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.438118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.438128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.438137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.438148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.438157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.438167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.438176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.438187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.438195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.438206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.438215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.438226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.438235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.438246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.438255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:23.150 [2024-12-06 13:22:29.438266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.438275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.438286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.438295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.438305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.438314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.438325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.438334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.150 [2024-12-06 13:22:29.438344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.150 [2024-12-06 13:22:29.438356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.151 [2024-12-06 13:22:29.438367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.151 [2024-12-06 13:22:29.438376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.151 [2024-12-06 13:22:29.438387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.151 [2024-12-06 13:22:29.438396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.151 [2024-12-06 13:22:29.438407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.151 [2024-12-06 13:22:29.438417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.151 [2024-12-06 13:22:29.438429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.151 [2024-12-06 13:22:29.438438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.151 [2024-12-06 13:22:29.438448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.151 [2024-12-06 13:22:29.438457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.151 [2024-12-06 13:22:29.438468] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.151 [2024-12-06 13:22:29.438476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.151 [... 24 further READ / ABORTED - SQ DELETION pairs elided: the same pattern repeats for lba 72784 through 72968 (len:8, assorted cids), timestamps 13:22:29.438487 through 13:22:29.438964 ...]
00:28:23.151 [2024-12-06 13:22:29.438996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:23.151 [2024-12-06 13:22:29.439007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:23.151 [2024-12-06 13:22:29.439015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72976 len:8 PRP1 0x0 PRP2 0x0
00:28:23.151 [2024-12-06 13:22:29.439026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.151 [2024-12-06 13:22:29.439336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:28:23.151 [2024-12-06 13:22:29.439437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x53bf30 (9): Bad file descriptor
00:28:23.151 [2024-12-06 13:22:29.439545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.152 [2024-12-06 13:22:29.439577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x53bf30 with addr=10.0.0.3, port=4420
00:28:23.152 [2024-12-06 13:22:29.439589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53bf30 is same with the state(6) to be set
00:28:23.152 [2024-12-06 13:22:29.439608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x53bf30 (9): Bad file descriptor
00:28:23.152 [2024-12-06 13:22:29.439625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:28:23.152 [2024-12-06 13:22:29.439634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:28:23.152 [2024-12-06 13:22:29.439644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:28:23.152 [2024-12-06 13:22:29.439654] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
00:28:23.152 [2024-12-06 13:22:29.439664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:28:23.152 13:22:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3
00:28:24.095 4519.00 IOPS, 17.65 MiB/s [2024-12-06T13:22:30.623Z]
00:28:24.095 [2024-12-06 13:22:30.439816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.095 [2024-12-06 13:22:30.439892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x53bf30 with addr=10.0.0.3, port=4420
00:28:24.095 [2024-12-06 13:22:30.439908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53bf30 is same with the state(6) to be set
00:28:24.095 [2024-12-06 13:22:30.439935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x53bf30 (9): Bad file descriptor
00:28:24.095 [2024-12-06 13:22:30.439970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:28:24.095 [2024-12-06 13:22:30.439982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:28:24.095 [2024-12-06 13:22:30.439993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:28:24.095 [2024-12-06 13:22:30.440005] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
00:28:24.095 [2024-12-06 13:22:30.440016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:28:25.029 3012.67 IOPS, 11.77 MiB/s [2024-12-06T13:22:31.557Z]
00:28:25.029 [... the identical connect()-failed / reinitialization-failed / reset-failed sequence repeats at 13:22:31.440 ...]
00:28:25.964 2259.50 IOPS, 8.83 MiB/s [2024-12-06T13:22:32.492Z]
00:28:25.964 [... and again at 13:22:32.444 ...]
00:28:25.964 13:22:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:28:26.531 [2024-12-06 13:22:32.881544] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:28:26.531 13:22:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 97649
00:28:27.097 1807.60 IOPS, 7.06 MiB/s [2024-12-06T13:22:33.625Z]
00:28:27.097 [2024-12-06 13:22:33.468724] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful.
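This block shows the test's intended failure/recovery cycle end to end: with the target listener removed, queued reads complete with ABORTED - SQ DELETION (NVMe generic status 00/08, commands aborted because their submission queue was deleted), each reconnect attempt dies with errno 111 (ECONNREFUSED), and the controller reset only succeeds once the listener is back. A minimal sketch of the same cycle, using the rpc.py calls that appear verbatim in this log; the 3-second sleep mirrors host/timeout.sh@101, and running these by hand is an illustration, not part of the test script:

    # Drop the TCP listener: in-flight I/O is aborted and host reconnects fail with ECONNREFUSED
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 3    # outage window, long enough for several reconnect attempts to fail
    # Restore the listener: the next reconnect attempt succeeds and the reset completes
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420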
00:28:29.054 2683.67 IOPS, 10.48 MiB/s [2024-12-06T13:22:36.514Z]
3491.43 IOPS, 13.64 MiB/s [2024-12-06T13:22:37.446Z]
4126.62 IOPS, 16.12 MiB/s [2024-12-06T13:22:38.378Z]
4619.56 IOPS, 18.05 MiB/s [2024-12-06T13:22:38.378Z]
4972.40 IOPS, 19.42 MiB/s
00:28:31.850 Latency(us)
00:28:31.850 [2024-12-06T13:22:38.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:31.850 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:31.850 Verification LBA range: start 0x0 length 0x4000
00:28:31.850 NVMe0n1 : 10.01 4973.81 19.43 3331.13 0.00 15380.17 912.29 3019898.88
00:28:31.850 [2024-12-06T13:22:38.378Z] ===================================================================================================================
00:28:31.850 [2024-12-06T13:22:38.378Z] Total : 4973.81 19.43 3331.13 0.00 15380.17 0.00 3019898.88
00:28:31.850 {
00:28:31.850 "results": [
00:28:31.850 {
00:28:31.850 "job": "NVMe0n1",
00:28:31.850 "core_mask": "0x4",
00:28:31.850 "workload": "verify",
00:28:31.850 "status": "finished",
00:28:31.850 "verify_range": {
00:28:31.850 "start": 0,
00:28:31.850 "length": 16384
00:28:31.850 },
00:28:31.850 "queue_depth": 128,
00:28:31.850 "io_size": 4096,
00:28:31.850 "runtime": 10.006812,
00:28:31.850 "iops": 4973.811839375017,
00:28:31.850 "mibps": 19.42895249755866,
00:28:31.850 "io_failed": 33334,
00:28:31.850 "io_timeout": 0,
00:28:31.850 "avg_latency_us": 15380.165373203554,
00:28:31.850 "min_latency_us": 912.290909090909,
00:28:31.850 "max_latency_us": 3019898.88
00:28:31.850 }
00:28:31.850 ],
00:28:31.850 "core_count": 1
00:28:31.850 }
13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 97496
13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97496 ']'
13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97496
13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97496
00:28:32.109 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:28:32.109 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:28:32.109 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97496'
killing process with pid 97496
00:28:32.109 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97496
Received shutdown signal, test time was about 10.000000 seconds
00:28:32.109
00:28:32.109 Latency(us)
00:28:32.109 [2024-12-06T13:22:38.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:32.109 [2024-12-06T13:22:38.637Z] ===================================================================================================================
00:28:32.109 [2024-12-06T13:22:38.637Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:32.109 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97496
00:28:32.109 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=97775
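Sanity-checking the results JSON against the table (plain arithmetic on the values above): 4973.81 IOPS x 4096 B = 20,372,726 B/s ≈ 19.43 MiB/s, matching "mibps"; 4973.81 IOPS x 10.006812 s ≈ 49,773 completed reads, alongside the 33,334 counted in "io_failed", which are the reads aborted by SQ deletion while the listener was down; and "max_latency_us" of 3,019,898.88 us (~3 s) is consistent with I/O that sat queued across the multi-second listener outage.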
00:28:32.109 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:28:32.109 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 97775 /var/tmp/bdevperf.sock
00:28:32.109 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97775 ']'
00:28:32.109 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:32.109 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:32.109 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:28:32.109 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:32.109 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:28:32.109 [2024-12-06 13:22:38.588680] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization...
00:28:32.109 [2024-12-06 13:22:38.589018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97775 ]
00:28:32.367 [2024-12-06 13:22:38.736504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:32.367 [2024-12-06 13:22:38.784514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:32.625 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:32.625 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:28:32.625 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=97785
00:28:32.625 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97775 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:28:32.625 13:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:28:32.884 13:22:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:28:33.451 NVMe0n1
00:28:33.451 13:22:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97843
00:28:33.451 13:22:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:33.709 13:22:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:28:33.709 Running I/O for 10 seconds...
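The attach at host/timeout.sh@120 is what arms the recovery behavior under test: --reconnect-delay-sec 2 makes the bdev layer wait 2 seconds between reconnect attempts after a disconnect, and --ctrlr-loss-timeout-sec 5 bounds how long it keeps retrying before giving up and deleting the controller. If the listener outage below outlasted that 5-second budget, NVMe0 would be torn down instead of recovering. One way to watch the state transitions from the side while the test runs (bdev_nvme_get_controllers is a standard SPDK RPC; invoking it here is illustrative, not part of the test script):

    # Poll the controller state over the bdevperf RPC socket during the outage
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers -n NVMe0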
00:28:34.325 13:22:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:28:34.859 5271.00 IOPS, 20.59 MiB/s [2024-12-06T13:22:41.387Z]
00:28:34.860 [2024-12-06 13:22:41.225804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1419a90 is same with the state(6) to be set
00:28:34.860 [... the identical tcp.c:1790 message repeats, with timestamps running from 13:22:41.226090 through 13:22:41.227517; over a hundred duplicate lines elided ...]
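That flood is one event, not many: tcp.c:1790 (nvmf_tcp_qpair_set_recv_state) is logged on the target side, unlike the nvme_tcp.c:326 variant from the host earlier, and it appears to emit one line per state-set call while the qpair at 0x1419a90 is torn down after the listener removal, so the entire run spans under 2 ms of real time. When triaging logs like this one, the repetition is easy to quantify, e.g. (build.log is a placeholder for wherever this console output was saved):

    # Count how often the target-side recv-state message appears in the captured log
    grep -c 'tcp.c:1790:nvmf_tcp_qpair_set_recv_state' build.log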
00:28:34.860 [2024-12-06 13:22:41.227855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:34.860 [2024-12-06 13:22:41.227897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:34.861 [... the same READ / ABORTED - SQ DELETION pairing repeats for every remaining queued randread (assorted cids and LBAs), timestamps 13:22:41.227934 through 13:22:41.230133; roughly sixty further pairs elided ...]
00:28:34.863 [2024-12-06 13:22:41.230152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:34.863 [2024-12-06 13:22:41.230165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.863 [2024-12-06 13:22:41.230176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.863 [2024-12-06 13:22:41.230186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.863 [2024-12-06 13:22:41.230197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.863 [2024-12-06 13:22:41.230206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.863 [2024-12-06 13:22:41.230218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.863 [2024-12-06 13:22:41.230227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.863 [2024-12-06 13:22:41.230238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.863 [2024-12-06 13:22:41.230248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.863 [2024-12-06 13:22:41.230259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.863 [2024-12-06 13:22:41.230268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.863 [2024-12-06 13:22:41.230279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.863 [2024-12-06 13:22:41.230288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.863 [2024-12-06 13:22:41.230299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.863 [2024-12-06 13:22:41.230308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.863 [2024-12-06 13:22:41.230319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.863 [2024-12-06 13:22:41.230328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.863 [2024-12-06 13:22:41.230340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.863 [2024-12-06 13:22:41.230350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.863 [2024-12-06 13:22:41.230361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.863 [2024-12-06 13:22:41.230371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.863 [2024-12-06 13:22:41.230382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:68280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.863 [2024-12-06 13:22:41.230392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.863 [2024-12-06 13:22:41.230403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.863 [2024-12-06 13:22:41.230414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.863 [2024-12-06 13:22:41.230426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.863 [2024-12-06 13:22:41.230435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 
[2024-12-06 13:22:41.230590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:55600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.230988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.230999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:123304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.231008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.231020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.231029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.231040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.231049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.231060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.231070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.231081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:122456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.231090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.231101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.231111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.231122] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.231131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.231142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.231151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.231163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.231172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.231183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.231206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.231221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.864 [2024-12-06 13:22:41.231230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.231262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:34.864 [2024-12-06 13:22:41.231273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:34.864 [2024-12-06 13:22:41.231282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111744 len:8 PRP1 0x0 PRP2 0x0 00:28:34.864 [2024-12-06 13:22:41.231293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.864 [2024-12-06 13:22:41.231624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:28:34.864 [2024-12-06 13:22:41.231726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab4f30 (9): Bad file descriptor 00:28:34.864 [2024-12-06 13:22:41.231887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.864 [2024-12-06 13:22:41.231922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab4f30 with addr=10.0.0.3, port=4420 00:28:34.864 [2024-12-06 13:22:41.231960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab4f30 is same with the state(6) to be set 00:28:34.864 [2024-12-06 13:22:41.231995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab4f30 (9): Bad file descriptor 00:28:34.864 [2024-12-06 13:22:41.232025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:28:34.864 [2024-12-06 13:22:41.232044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:28:34.864 [2024-12-06 13:22:41.232062] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:28:34.864 [2024-12-06 13:22:41.232078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:28:34.864 [2024-12-06 13:22:41.232093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:28:34.864 13:22:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 97843 00:28:36.730 4132.00 IOPS, 16.14 MiB/s [2024-12-06T13:22:43.258Z] 2754.67 IOPS, 10.76 MiB/s [2024-12-06T13:22:43.258Z] [2024-12-06 13:22:43.232288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.730 [2024-12-06 13:22:43.232366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab4f30 with addr=10.0.0.3, port=4420 00:28:36.730 [2024-12-06 13:22:43.232385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab4f30 is same with the state(6) to be set 00:28:36.730 [2024-12-06 13:22:43.232429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab4f30 (9): Bad file descriptor 00:28:36.730 [2024-12-06 13:22:43.232454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:28:36.730 [2024-12-06 13:22:43.232465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:28:36.730 [2024-12-06 13:22:43.232477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:28:36.730 [2024-12-06 13:22:43.232488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:28:36.730 [2024-12-06 13:22:43.232500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:28:38.596 2066.00 IOPS, 8.07 MiB/s [2024-12-06T13:22:45.382Z] 1652.80 IOPS, 6.46 MiB/s [2024-12-06T13:22:45.382Z] [2024-12-06 13:22:45.232695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.854 [2024-12-06 13:22:45.232768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab4f30 with addr=10.0.0.3, port=4420 00:28:38.854 [2024-12-06 13:22:45.232786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab4f30 is same with the state(6) to be set 00:28:38.854 [2024-12-06 13:22:45.232814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab4f30 (9): Bad file descriptor 00:28:38.854 [2024-12-06 13:22:45.232835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:28:38.854 [2024-12-06 13:22:45.232845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:28:38.854 [2024-12-06 13:22:45.232856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:28:38.854 [2024-12-06 13:22:45.232869] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
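The cycle above repeats on a fixed two-second cadence: every queued READ is completed with ABORTED - SQ DELETION (status 00/08, the NVMe generic status "Command Aborted due to SQ Deletion"), connect() to 10.0.0.3:4420 fails with errno 111 (ECONNREFUSED) while the target side is down, and bdev_nvme schedules the next reset attempt. A minimal sketch of attaching a bdev controller with this kind of retry policy via SPDK's rpc.py (the option names are rpc.py's; the bdev name, address, and timeout values here are illustrative, not read from this run):

    # Retry the connection every 2 s; give up once ctrlr-loss-timeout expires.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 2 \
        --ctrlr-loss-timeout-sec 10 \
        --fast-io-fail-timeout-sec 2

The same 2 s spacing shows up again in the probe timestamps further down (1669 -> 3670 -> 5670 -> 7671 ms).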
00:28:38.854 [2024-12-06 13:22:45.232881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:28:40.752 1377.33 IOPS, 5.38 MiB/s
[2024-12-06T13:22:47.280Z] 1180.57 IOPS, 4.61 MiB/s
[2024-12-06T13:22:47.280Z] [2024-12-06 13:22:47.233019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:28:40.752 [2024-12-06 13:22:47.233285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:28:40.752 [2024-12-06 13:22:47.233309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:28:40.752 [2024-12-06 13:22:47.233320] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state
00:28:40.752 [2024-12-06 13:22:47.233334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:28:41.948 1033.00 IOPS, 4.04 MiB/s
00:28:41.948
00:28:41.948 Latency(us)
00:28:41.948 [2024-12-06T13:22:48.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:41.948 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:28:41.948 NVMe0n1 : 8.22 1005.32 3.93 15.57 0.00 125256.50 4498.15 7015926.69
00:28:41.948 [2024-12-06T13:22:48.476Z] ===================================================================================================================
00:28:41.948 [2024-12-06T13:22:48.476Z] Total : 1005.32 3.93 15.57 0.00 125256.50 4498.15 7015926.69
00:28:41.948 {
00:28:41.948 "results": [
00:28:41.948 {
00:28:41.948 "job": "NVMe0n1",
00:28:41.948 "core_mask": "0x4",
00:28:41.948 "workload": "randread",
00:28:41.948 "status": "finished",
00:28:41.948 "queue_depth": 128,
00:28:41.948 "io_size": 4096,
00:28:41.948 "runtime": 8.220234,
00:28:41.948 "iops": 1005.324179336014,
00:28:41.948 "mibps": 3.927047575531305,
00:28:41.948 "io_failed": 128,
00:28:41.948 "io_timeout": 0,
00:28:41.948 "avg_latency_us": 125256.49643123322,
00:28:41.948 "min_latency_us": 4498.152727272727,
00:28:41.948 "max_latency_us": 7015926.69090909
00:28:41.948 }
00:28:41.948 ],
00:28:41.948 "core_count": 1
00:28:41.948 }
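Two quick consistency checks on the results block above and on the trace dump that follows; a sketch in plain shell (only the numbers, the grep pattern, and the trace path come from this log):

    # "mibps" is iops * io_size / 2^20:
    awk 'BEGIN { printf "%.12f\n", 1005.324179336014 * 4096 / 1048576 }'
    # -> 3.927047575531, matching the "mibps" field above

    # host/timeout.sh@132 counts delayed reconnects in trace.txt; the three
    # hits below make the (( count <= 2 )) guard false, so the test proceeds.
    grep -c 'reconnect delay bdev controller NVMe0' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt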
00:28:41.948 13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:28:41.948 Attaching 5 probes...
00:28:41.948 1669.742957: reset bdev controller NVMe0
00:28:41.948 1669.915421: reconnect bdev controller NVMe0
00:28:41.948 3670.257651: reconnect delay bdev controller NVMe0
00:28:41.948 3670.285885: reconnect bdev controller NVMe0
00:28:41.948 5670.675328: reconnect delay bdev controller NVMe0
00:28:41.948 5670.702664: reconnect bdev controller NVMe0
00:28:41.948 7671.116039: reconnect delay bdev controller NVMe0
00:28:41.948 7671.141426: reconnect bdev controller NVMe0
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 97785
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 97775
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97775 ']'
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97775
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97775
killing process with pid 97775
Received shutdown signal, test time was about 8.290973 seconds
00:28:41.948
00:28:41.948 Latency(us)
00:28:41.948 [2024-12-06T13:22:48.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:41.948 [2024-12-06T13:22:48.476Z] ===================================================================================================================
00:28:41.948 [2024-12-06T13:22:48.476Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97775'
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97775
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97775
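The killprocess helper traced above follows the usual guard-then-signal shape. A condensed paraphrase (not the verbatim helper; the sudo/SIGKILL special case it also handles is omitted):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                    # @954: require a pid
        kill -0 "$pid" || return 1                   # @958: is it still running?
        ps --no-headers -o comm= "$pid" > /dev/null  # @959/@960: identify the process
        echo "killing process with pid $pid"
        kill "$pid"                                  # default SIGTERM
        wait "$pid" || true                          # reap if it is our child; ignore exit code
    }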
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:42.515 13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20}
13:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 97213 ']'
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 97213
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97213 ']'
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97213
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97213
killing process with pid 97213
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97213'
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97213
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97213
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']'
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini
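nvmf_veth_fini unwinds the virtual topology in reverse order of setup: detach the bridge ports, bring them down, delete the bridge and the host-side interfaces, then the interfaces inside the nvmf_tgt_ns_spdk namespace, and finally the namespace itself, as the traced ip commands below show. A minimal standalone sketch of the same sequence (interface names as traced; the final netns delete is an assumption about what _remove_spdk_ns amounts to):

    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster    # detach from the nvmf_br bridge
    done
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed equivalent of _remove_spdk_ns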
13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0
00:28:43.033
00:28:43.033 real 0m47.121s
00:28:43.033 user 2m20.078s
00:28:43.033 sys 0m4.839s
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:43.033 13:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:28:43.033 ************************************
00:28:43.033 END TEST nvmf_timeout
00:28:43.033 ************************************
00:28:43.291 13:22:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]]
00:28:43.291 13:22:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:28:43.291
00:28:43.291 real 5m40.677s
00:28:43.291 user 14m47.336s
00:28:43.291 sys 1m2.059s
00:28:43.291 13:22:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:43.291 13:22:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.291 ************************************
00:28:43.291 END TEST nvmf_host
00:28:43.291 ************************************
00:28:43.291 13:22:49 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:28:43.291 13:22:49 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:28:43.291 13:22:49 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:28:43.291 13:22:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:28:43.291 13:22:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:43.291 13:22:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:43.291 ************************************
00:28:43.291 START TEST nvmf_target_core_interrupt_mode
00:28:43.291 ************************************
00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:28:43.291 * Looking for test storage...
00:28:43.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:43.291 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:43.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.550 --rc genhtml_branch_coverage=1 00:28:43.550 --rc genhtml_function_coverage=1 00:28:43.550 --rc genhtml_legend=1 00:28:43.550 --rc geninfo_all_blocks=1 00:28:43.550 --rc geninfo_unexecuted_blocks=1 00:28:43.550 00:28:43.550 ' 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:43.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.550 --rc genhtml_branch_coverage=1 00:28:43.550 --rc genhtml_function_coverage=1 00:28:43.550 --rc genhtml_legend=1 00:28:43.550 --rc geninfo_all_blocks=1 00:28:43.550 --rc geninfo_unexecuted_blocks=1 00:28:43.550 00:28:43.550 ' 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:43.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.550 --rc genhtml_branch_coverage=1 00:28:43.550 --rc genhtml_function_coverage=1 00:28:43.550 --rc genhtml_legend=1 00:28:43.550 --rc geninfo_all_blocks=1 00:28:43.550 --rc geninfo_unexecuted_blocks=1 00:28:43.550 00:28:43.550 ' 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:43.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.550 --rc genhtml_branch_coverage=1 00:28:43.550 --rc genhtml_function_coverage=1 00:28:43.550 --rc genhtml_legend=1 00:28:43.550 --rc geninfo_all_blocks=1 00:28:43.550 --rc geninfo_unexecuted_blocks=1 00:28:43.550 00:28:43.550 ' 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:43.550 ************************************ 00:28:43.550 START TEST nvmf_abort 00:28:43.550 ************************************ 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:43.550 * Looking for test storage... 00:28:43.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:28:43.550 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:43.550 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:43.550 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:43.550 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:43.551 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:43.551 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:43.551 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:43.551 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:43.551 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:43.551 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:43.551 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:43.551 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:43.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.810 --rc genhtml_branch_coverage=1 00:28:43.810 --rc genhtml_function_coverage=1 00:28:43.810 --rc genhtml_legend=1 00:28:43.810 --rc geninfo_all_blocks=1 00:28:43.810 --rc geninfo_unexecuted_blocks=1 00:28:43.810 00:28:43.810 ' 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:43.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.810 --rc genhtml_branch_coverage=1 00:28:43.810 --rc genhtml_function_coverage=1 00:28:43.810 --rc genhtml_legend=1 00:28:43.810 --rc geninfo_all_blocks=1 00:28:43.810 --rc geninfo_unexecuted_blocks=1 00:28:43.810 00:28:43.810 ' 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:43.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.810 --rc genhtml_branch_coverage=1 00:28:43.810 --rc genhtml_function_coverage=1 00:28:43.810 --rc genhtml_legend=1 00:28:43.810 --rc geninfo_all_blocks=1 00:28:43.810 --rc geninfo_unexecuted_blocks=1 00:28:43.810 00:28:43.810 ' 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:43.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.810 --rc genhtml_branch_coverage=1 00:28:43.810 --rc genhtml_function_coverage=1 00:28:43.810 --rc genhtml_legend=1 00:28:43.810 --rc geninfo_all_blocks=1 00:28:43.810 --rc geninfo_unexecuted_blocks=1 00:28:43.810 00:28:43.810 ' 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.810 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.811 13:22:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:43.811 Cannot find device "nvmf_init_br" 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:43.811 Cannot find device "nvmf_init_br2" 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:43.811 Cannot find device "nvmf_tgt_br" 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:43.811 Cannot find device "nvmf_tgt_br2" 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:43.811 Cannot find device "nvmf_init_br" 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:43.811 Cannot find device "nvmf_init_br2" 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:43.811 Cannot find device "nvmf_tgt_br" 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:43.811 Cannot find device "nvmf_tgt_br2" 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:43.811 Cannot find device "nvmf_br" 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:43.811 Cannot find device "nvmf_init_if" 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:43.811 Cannot find device "nvmf_init_if2" 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:43.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:43.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:43.811 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:44.070 
13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:44.070 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
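The ip commands traced above assemble a bridged veth topology: one end of each veth pair stays on the host (initiator side, 10.0.0.1 and 10.0.0.2), the other is pushed into the nvmf_tgt_ns_spdk namespace (target side, 10.0.0.3 and 10.0.0.4), and the host-side peers are enslaved to the nvmf_br bridge. The earlier "Cannot find device" lines are the expected no-ops of tearing down a topology that does not exist yet on a fresh runner. Condensed to a single initiator/target pair, using only commands already visible in the trace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end moves into the netns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peers
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP

The ping exchanges around this point then verify reachability in both directions (host to namespace and back) before the target is started.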
00:28:44.070 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:28:44.070 00:28:44.070 --- 10.0.0.3 ping statistics --- 00:28:44.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.070 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:44.070 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:44.070 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:28:44.070 00:28:44.070 --- 10.0.0.4 ping statistics --- 00:28:44.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.070 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:44.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:44.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:28:44.070 00:28:44.070 --- 10.0.0.1 ping statistics --- 00:28:44.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.070 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:44.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:44.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:28:44.070 00:28:44.070 --- 10.0.0.2 ping statistics --- 00:28:44.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.070 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:44.070 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:44.071 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:44.071 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:44.071 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.071 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=98279 00:28:44.071 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 98279 00:28:44.071 13:22:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 98279 ']' 00:28:44.071 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.071 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:44.071 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:44.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:44.071 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.071 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:44.071 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.071 [2024-12-06 13:22:50.584626] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:44.071 [2024-12-06 13:22:50.585720] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:28:44.071 [2024-12-06 13:22:50.585788] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:44.329 [2024-12-06 13:22:50.734369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:44.329 [2024-12-06 13:22:50.774293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:44.329 [2024-12-06 13:22:50.774350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:44.329 [2024-12-06 13:22:50.774361] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:44.329 [2024-12-06 13:22:50.774379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:44.329 [2024-12-06 13:22:50.774387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:44.329 [2024-12-06 13:22:50.775741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.329 [2024-12-06 13:22:50.775815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:44.329 [2024-12-06 13:22:50.775823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.329 [2024-12-06 13:22:50.832343] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:44.329 [2024-12-06 13:22:50.833334] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:44.329 [2024-12-06 13:22:50.833335] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:44.329 [2024-12-06 13:22:50.833405] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
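With the namespace wired up, nvmfappstart launches the target inside it; the notices above confirm what this suite exercises: DPDK starts reactors on cores 1-3 (mask 0xE), and every spdk_thread, app_thread plus the three nvmf poll groups, is switched to interrupt mode instead of busy polling. The launch reduced to its essentials (the until-loop is a simplified stand-in for the traced waitforlisten helper, which polls the RPC socket):

  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  # wait until the app's RPC socket exists before issuing any rpc_cmd
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done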
00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.589 [2024-12-06 13:22:50.921345] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.589 Malloc0 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.589 Delay0 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.589 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.590 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:44.590 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.590 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.590 13:22:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.590 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:44.590 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.590 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.590 [2024-12-06 13:22:50.997619] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:44.590 13:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.590 13:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:28:44.590 13:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.590 13:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.590 13:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.590 13:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:44.847 [2024-12-06 13:22:51.183830] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:46.745 Initializing NVMe Controllers 00:28:46.745 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:28:46.745 controller IO queue size 128 less than required 00:28:46.745 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:46.745 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:46.745 Initialization complete. Launching workers. 
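Before the abort counters below, abort.sh configured the target entirely over RPC: a TCP transport with the traced options (-o -u 8192 -a 256), a 64 MiB malloc bdev with 4096-byte blocks wrapped in a delay bdev whose four latency knobs (average and p99, read and write, in microseconds) are all set to 1,000,000, which keeps commands in flight long enough to be abortable, and a subsystem exposing Delay0 behind a listener on 10.0.0.3:4420. The same sequence issued directly with scripts/rpc.py (rpc_cmd is the harness wrapper around it), parameters copied from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

The abort example then connects to that listener for one second on a single core at queue depth 128 (-t 1 -c 0x1 -q 128); its "queue size 128 less than required" warning means some requests queue at the NVMe driver rather than on the controller, which is exactly the condition the abort path needs, and the NS/CTRLR counters that follow tally the aborts submitted against the I/O they cancelled.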
00:28:46.745 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 24753 00:28:46.745 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 24810, failed to submit 66 00:28:46.745 success 24753, unsuccessful 57, failed 0 00:28:46.745 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:46.745 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.745 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.745 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.745 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:46.745 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:46.745 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:46.745 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:47.002 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:47.002 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:47.002 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:47.002 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:47.002 rmmod nvme_tcp 00:28:47.002 rmmod nvme_fabrics 00:28:47.002 rmmod nvme_keyring 00:28:47.002 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:47.002 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:47.002 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:47.002 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 98279 ']' 00:28:47.002 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 98279 00:28:47.002 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 98279 ']' 00:28:47.002 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 98279 00:28:47.002 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:47.002 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:47.003 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98279 00:28:47.003 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:47.003 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:47.003 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98279' 00:28:47.003 killing process with pid 98279 00:28:47.003 
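After the counters are checked, the test tears itself down: killprocess stops the target and nvmftestfini unwinds the modules, firewall rules, and topology. Condensed from the traced commands (the namespace removal itself happens inside _remove_spdk_ns, whose body is not traced, so the last line is an assumed equivalent):

  sync                                          # flush before unloading the host NVMe modules
  modprobe -v -r nvme-tcp                       # rmmod reports nvme_tcp, nvme_fabrics, nvme_keyring
  kill 98279                                    # stop nvmf_tgt; the harness then waits on the pid
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules tagged SPDK_NVMF
  ip link set nvmf_init_br nomaster && ip link set nvmf_init_br down
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if                   # deleting one veth end removes its peer too
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns delete nvmf_tgt_ns_spdk              # assumed equivalent of the untraced _remove_spdk_ns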
13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 98279 00:28:47.003 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 98279 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:47.260 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:47.519 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:47.519 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.519 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.519 13:22:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.519 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:28:47.519 00:28:47.519 real 0m3.963s 00:28:47.519 user 0m8.875s 00:28:47.519 sys 0m1.483s 00:28:47.519 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:47.519 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:47.519 ************************************ 00:28:47.519 END TEST nvmf_abort 00:28:47.519 ************************************ 00:28:47.519 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:47.519 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:47.519 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:47.519 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:47.519 ************************************ 00:28:47.519 START TEST nvmf_ns_hotplug_stress 00:28:47.519 ************************************ 00:28:47.519 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:47.519 * Looking for test storage... 00:28:47.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:47.519 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:47.519 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:28:47.519 13:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:47.519 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:47.519 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:47.519 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:47.519 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:47.519 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:47.519 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:47.519 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:47.519 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:47.519 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:47.519 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:47.519 13:22:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:47.519 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:47.519 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:47.519 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:47.519 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:47.519 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:47.520 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:47.520 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:47.520 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:47.520 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:47.520 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:47.520 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:47.520 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:47.520 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:47.520 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:47.796 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:47.796 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:47.796 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:47.796 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:47.796 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:47.796 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:47.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.796 --rc genhtml_branch_coverage=1 00:28:47.796 --rc genhtml_function_coverage=1 00:28:47.796 --rc genhtml_legend=1 00:28:47.796 --rc geninfo_all_blocks=1 00:28:47.796 --rc geninfo_unexecuted_blocks=1 00:28:47.796 00:28:47.796 ' 00:28:47.796 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:47.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.796 --rc genhtml_branch_coverage=1 00:28:47.796 --rc genhtml_function_coverage=1 00:28:47.796 --rc genhtml_legend=1 00:28:47.796 --rc geninfo_all_blocks=1 00:28:47.796 --rc geninfo_unexecuted_blocks=1 00:28:47.796 00:28:47.796 
' 00:28:47.796 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:47.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.796 --rc genhtml_branch_coverage=1 00:28:47.796 --rc genhtml_function_coverage=1 00:28:47.796 --rc genhtml_legend=1 00:28:47.796 --rc geninfo_all_blocks=1 00:28:47.797 --rc geninfo_unexecuted_blocks=1 00:28:47.797 00:28:47.797 ' 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:47.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.797 --rc genhtml_branch_coverage=1 00:28:47.797 --rc genhtml_function_coverage=1 00:28:47.797 --rc genhtml_legend=1 00:28:47.797 --rc geninfo_all_blocks=1 00:28:47.797 --rc geninfo_unexecuted_blocks=1 00:28:47.797 00:28:47.797 ' 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:47.797 13:22:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.797 13:22:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:47.797 Cannot find device "nvmf_init_br" 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:28:47.797 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:28:47.798 Cannot find device "nvmf_init_br2" 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:47.798 Cannot find device "nvmf_tgt_br" 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:47.798 Cannot find device "nvmf_tgt_br2" 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:47.798 Cannot find device "nvmf_init_br" 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:47.798 Cannot find device "nvmf_init_br2" 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:47.798 Cannot find device "nvmf_tgt_br" 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:47.798 Cannot find device "nvmf_tgt_br2" 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:47.798 Cannot find device "nvmf_br" 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:47.798 Cannot find device "nvmf_init_if" 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:47.798 Cannot find device "nvmf_init_if2" 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:47.798 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:47.798 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:47.798 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:48.057 13:22:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:48.057 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:48.057 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:28:48.057 00:28:48.057 --- 10.0.0.3 ping statistics --- 00:28:48.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.057 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:48.057 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:48.057 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:28:48.057 00:28:48.057 --- 10.0.0.4 ping statistics --- 00:28:48.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.057 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:48.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:48.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:28:48.057 00:28:48.057 --- 10.0.0.1 ping statistics --- 00:28:48.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.057 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:48.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:28:48.057 00:28:48.057 --- 10.0.0.2 ping statistics --- 00:28:48.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.057 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=98556 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 98556 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 98556 ']' 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:48.057 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:48.057 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:48.316 [2024-12-06 13:22:54.609353] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:48.316 [2024-12-06 13:22:54.610981] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:28:48.316 [2024-12-06 13:22:54.611075] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.316 [2024-12-06 13:22:54.765937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:48.316 [2024-12-06 13:22:54.799182] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.316 [2024-12-06 13:22:54.799249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.316 [2024-12-06 13:22:54.799261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.316 [2024-12-06 13:22:54.799269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.316 [2024-12-06 13:22:54.799276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:48.316 [2024-12-06 13:22:54.800150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.316 [2024-12-06 13:22:54.800053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.316 [2024-12-06 13:22:54.800143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:48.574 [2024-12-06 13:22:54.862059] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:48.574 [2024-12-06 13:22:54.862188] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:48.574 [2024-12-06 13:22:54.862739] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:48.574 [2024-12-06 13:22:54.863251] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
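Annotation: by this point nvmf_veth_init has built the self-contained test network that the rest of the run uses: the target-side veth ends (nvmf_tgt_if/nvmf_tgt_if2, 10.0.0.3/.4) live inside the nvmf_tgt_ns_spdk namespace, the initiator ends (nvmf_init_if/nvmf_init_if2, 10.0.0.1/.2) stay in the root namespace, the bridge-facing peers all join nvmf_br, iptables accepts TCP/4420 on the initiator interfaces, and the four pings verify both directions. Condensed to one veth pair per side (the *_if2/*_br2 pair is set up identically):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # root namespace -> target namespace across the bridge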
00:28:48.574 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:48.574 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:48.574 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:48.574 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:48.574 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:48.574 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.574 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:28:48.574 13:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:48.832 [2024-12-06 13:22:55.272879] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.832 13:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:49.089 13:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:49.656 [2024-12-06 13:22:55.905435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:49.656 13:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:28:49.914 13:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:50.173 Malloc0 00:28:50.173 13:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:50.432 Delay0 00:28:50.432 13:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:50.998 13:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:51.255 NULL1 00:28:51.255 13:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:28:51.513 13:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:51.513 13:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=98681 00:28:51.514 13:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:28:51.514 13:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:52.886 Read completed with error (sct=0, sc=11) 00:28:52.886 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:52.886 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.886 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.886 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.886 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.886 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.886 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.144 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:53.144 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:53.401 true 00:28:53.401 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:28:53.401 13:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:54.333 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:54.333 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:54.333 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:54.333 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:54.333 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:54.333 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:54.333 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:54.333 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:54.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:54.590 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:54.590 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
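Annotation: before the perf load above started, ns_hotplug_stress.sh@27-42 built the whole target stack over RPC: a TCP transport, subsystem cnode1 capped at 10 namespaces, a data and a discovery listener on 10.0.0.3:4420, a delay bdev layered on a malloc bdev as nsid 1, and a null bdev as nsid 2, then launched spdk_nvme_perf against it. A condensed replay of those traced calls (perf flags copied verbatim from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10  # max 10 ns
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0            # 32 MB backing bdev
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0             # nsid 1, the hotplug victim
    $rpc bdev_null_create NULL1 1000 512                 # nsid 2, resized during the run
    $rpc nvmf_subsystem_add_ns "$nqn" NULL1
    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!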
-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:54.848 true 00:28:54.848 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:28:54.848 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:55.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.671 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:55.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.929 13:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:55.929 13:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:56.496 true 00:28:56.496 13:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:28:56.496 13:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:56.754 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:57.013 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:57.013 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:57.271 true 00:28:57.271 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:28:57.271 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.529 13:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:58.095 13:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:58.095 13:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:58.353 true 00:28:58.353 13:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:28:58.353 13:23:04 
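Annotation: the repeating pattern above is the core of the stress test. While spdk_nvme_perf (PID 98681) keeps queue-depth-128 randreads outstanding, each pass removes namespace 1, re-adds Delay0, and grows NULL1 by one block; the bare "true" lines are bdev_null_resize succeeding, and every pass is gated on kill -0 confirming perf is still running. xtrace never prints the loop as a unit, so this reconstruction of the script's shape is inferred from the repeated @44-@50 calls:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do     # stop once perf (98681) exits
        $rpc nvmf_subsystem_remove_ns "$nqn" 1    # hot-unplug nsid 1 under I/O
        $rpc nvmf_subsystem_add_ns "$nqn" Delay0  # hot-plug it back
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"  # prints 'true' in the log
    done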
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:58.611 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:58.869 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:58.869 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:59.127 true 00:28:59.127 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:28:59.127 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.693 13:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:59.951 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:59.951 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:00.209 true 00:29:00.209 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:00.209 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.468 13:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:01.035 13:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:01.035 13:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:01.293 true 00:29:01.293 13:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:01.293 13:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:01.615 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:02.181 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:02.181 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:02.440 true 00:29:02.440 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:02.440 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:03.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:03.815 13:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:03.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:03.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:03.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:03.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.073 13:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:04.073 13:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:04.331 true 00:29:04.332 13:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:04.332 13:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.265 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:05.523 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:05.523 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:05.781 true 00:29:05.781 13:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:05.781 13:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:06.039 13:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:06.297 13:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:06.297 13:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:06.555 true 00:29:06.555 13:23:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:06.555 13:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.145 13:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:07.145 13:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:07.145 13:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:07.710 true 00:29:07.710 13:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:07.710 13:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.969 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:08.227 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:08.227 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:08.485 true 00:29:08.485 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:08.485 13:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.743 13:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:09.000 13:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:09.000 13:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:09.257 true 00:29:09.514 13:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:09.514 13:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:09.773 13:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:10.030 13:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:10.030 13:23:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:10.596 true 00:29:10.596 13:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:10.596 13:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.160 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.417 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:11.417 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:11.678 true 00:29:11.678 13:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:11.678 13:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.936 13:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.194 13:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:12.194 13:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:12.452 true 00:29:12.710 13:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:12.710 13:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:12.967 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:13.225 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:13.225 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:13.492 true 00:29:13.492 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:13.492 13:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:13.771 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.030 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:14.030 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:14.288 true 00:29:14.546 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:14.546 13:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:15.112 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:15.371 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:15.371 13:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:15.629 true 00:29:15.629 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:15.629 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:16.196 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:16.454 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:16.454 13:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:16.713 true 00:29:16.713 13:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:16.713 13:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:16.970 13:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:17.228 13:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:17.229 13:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:17.487 true 00:29:17.487 13:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:17.487 13:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:18.053 13:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:18.311 13:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:18.311 13:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:18.569 true 00:29:18.569 13:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:18.569 13:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:19.133 13:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:19.391 13:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:19.391 13:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:19.649 true 00:29:19.649 13:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:19.649 13:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:19.907 13:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:20.166 13:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:20.166 13:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:20.738 true 00:29:20.738 13:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:20.738 13:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:20.997 13:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:21.257 13:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:21.257 13:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:29:21.515 true 00:29:21.515 
13:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:21.515 13:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.774 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:21.774 Initializing NVMe Controllers 00:29:21.774 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:29:21.774 Controller IO queue size 128, less than required. 00:29:21.774 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:21.774 Controller IO queue size 128, less than required. 00:29:21.774 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:21.775 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:21.775 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:21.775 Initialization complete. Launching workers. 00:29:21.775 ======================================================== 00:29:21.775 Latency(us) 00:29:21.775 Device Information : IOPS MiB/s Average min max 00:29:21.775 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1074.30 0.52 34334.57 3261.23 1016148.40 00:29:21.775 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5372.45 2.62 23824.48 3480.83 1309999.23 00:29:21.775 ======================================================== 00:29:21.775 Total : 6446.75 3.15 25575.90 3261.23 1309999.23 00:29:21.775 00:29:22.034 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:22.034 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:22.294 true 00:29:22.294 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98681 00:29:22.294 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (98681) - No such process 00:29:22.294 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 98681 00:29:22.294 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.553 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:22.812 13:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:29:22.812 13:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:29:22.812 13:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:29:22.812 13:23:29 
00:29:22.812 13:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:22.812 13:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:29:23.379 null0
00:29:23.379 13:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:23.379 13:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:23.379 13:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:29:23.638 null1
00:29:23.638 13:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:23.638 13:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:23.638 13:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:29:23.896 null2
00:29:23.896 13:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:23.896 13:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:23.896 13:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:29:24.154 null3
00:29:24.155 13:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:24.155 13:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:24.155 13:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:29:24.413 null4
00:29:24.413 13:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:24.413 13:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:24.413 13:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:29:24.980 null5
00:29:24.980 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:24.980 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:24.980 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:29:25.238 null6
00:29:25.238 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:25.238 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:25.238 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:29:25.497 null7
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
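All eight bdev_null_create calls above come from the same two traced lines (@59/@60); reconstructed as a loop, with the positional arguments being bdev name, size in MiB, and block size in bytes:

    # Sketch of the @59/@60 setup loop: one 100 MiB null bdev with 4096-byte
    # blocks per worker thread (rpc.py abbreviates scripts/rpc.py as above).
    nthreads=8
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096
    done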
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:25.497 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
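From here the @14-@18 worker traces interleave with the @62-@64 spawn loop because each add_remove call runs as a background job. One plausible reconstruction of the worker and the fan-out, matching the traced line numbers (the @66 wait just below collects the eight worker PIDs):

    # Each worker hot-adds and hot-removes its own namespace ten times (@16-@18).
    add_remove() {
        local nsid=$1 bdev=$2                                                       # @14
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &    # @63: one background worker per ns/bdev pair
        pids+=($!)                          # @64: remember it for the final wait
    done
    wait "${pids[@]}"                       # @66: block until all eight workers exit

Running the workers in the background is what makes the nsid traces below appear out of order; the trailing wait is the only synchronization point.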
00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:25.498 13:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 99682 99683 99686 99688 99689 99691 99694 99695 00:29:25.756 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:25.756 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:25.756 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:26.015 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:26.015 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:29:26.015 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:26.015 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:26.015 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:26.015 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.015 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.016 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.275 13:23:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.275 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:26.533 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:26.533 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:26.533 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:26.533 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:26.534 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:26.534 13:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:26.534 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:26.534 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:26.792 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.792 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.792 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:26.792 13:23:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.792 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.792 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:26.792 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.792 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.792 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:26.792 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.792 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.792 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:26.792 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.792 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.792 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:26.792 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.792 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.792 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:27.050 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:27.050 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:27.050 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:27.050 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:27.050 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:27.050 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:27.050 13:23:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:27.050 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:27.050 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:27.050 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:27.050 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:27.307 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:27.307 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:27.307 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:27.307 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:27.307 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:27.307 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:27.307 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:27.307 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:27.307 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:27.307 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:27.307 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:27.307 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:27.564 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:27.564 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:27.564 
13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:27.564 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:27.564 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:27.564 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:27.564 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:27.564 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:27.564 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:27.564 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:27.564 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:27.564 13:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:27.564 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:27.564 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:27.564 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:27.564 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:27.822 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:27.822 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:27.823 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:27.823 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:27.823 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:29:27.823 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:28.081 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:28.339 13:23:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:28.339 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:28.339 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:28.339 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:28.339 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:28.339 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:28.339 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:28.339 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:28.339 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:28.339 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:28.597 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:28.597 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:28.597 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:28.597 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:28.597 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:28.597 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:28.598 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:28.598 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:28.598 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:28.598 
13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:28.856 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:28.856 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:28.856 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:28.856 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:28.856 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:28.856 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:28.856 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:28.856 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:28.856 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:28.856 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:28.856 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:28.856 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:28.856 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:28.856 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:28.856 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:28.856 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:29.114 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:29.114 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.114 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:29.114 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:29.114 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:29.114 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:29.114 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:29.114 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:29.114 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:29.373 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:29.373 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.373 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:29.373 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:29.373 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.373 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:29.373 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:29.373 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.373 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:29.373 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:29.373 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:29.373 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.373 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:29.630 13:23:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:29.630 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.631 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:29.631 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:29.631 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:29.631 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.631 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:29.631 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:29.631 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.631 13:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:29.631 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:29.631 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:29.631 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:29.631 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.631 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:29.889 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:29.889 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:29.889 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.889 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:29.889 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:29.889 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:29.889 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:29.889 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:29.889 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.889 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:29.889 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:29.889 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.889 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:30.147 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.147 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.147 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:30.147 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:30.147 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.147 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.147 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:30.147 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.147 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.147 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:30.147 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:30.147 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:30.147 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.147 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.147 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:30.404 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:30.404 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.404 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.404 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:30.404 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:30.404 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:30.404 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:30.404 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.404 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.404 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:30.662 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:30.662 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.662 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.662 13:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:30.662 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.662 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.662 13:23:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:30.662 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:30.662 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.662 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.662 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:30.662 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.662 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.662 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:30.920 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:30.920 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:30.920 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.920 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.920 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:30.920 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.920 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.920 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:30.920 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:30.920 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.920 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.920 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:30.921 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:31.179 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:31.179 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.179 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.179 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:31.179 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:31.179 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.179 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.179 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:31.179 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:31.438 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.438 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.438 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:31.438 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:31.438 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.438 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.438 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:31.438 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.438 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.438 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:31.438 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:31.438 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.438 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.438 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:31.438 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:31.438 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.438 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.438 13:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:31.697 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:31.697 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.697 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.697 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.697 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.697 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:31.697 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:31.697 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:31.697 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:31.955 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.955 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.955 13:23:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:31.955 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.955 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.955 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.955 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.955 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:31.955 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.955 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.212 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.212 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.212 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.212 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.212 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.212 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.212 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:32.212 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:32.212 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:32.212 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:32.471 rmmod nvme_tcp 00:29:32.471 rmmod nvme_fabrics 00:29:32.471 rmmod nvme_keyring 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@129 -- # return 0 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 98556 ']' 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 98556 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 98556 ']' 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 98556 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98556 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:32.471 killing process with pid 98556 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98556' 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 98556 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 98556 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:32.471 13:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:32.730 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:32.730 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:32.730 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 
nomaster 00:29:32.730 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:32.730 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:32.730 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:32.730 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:32.730 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:32.730 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:32.730 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:32.730 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:32.730 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:32.730 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:32.730 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.730 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.730 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:29:32.992 00:29:32.992 real 0m45.385s 00:29:32.992 user 3m25.438s 00:29:32.992 sys 0m19.683s 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:32.992 ************************************ 00:29:32.992 END TEST nvmf_ns_hotplug_stress 00:29:32.992 ************************************ 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:32.992 ************************************ 00:29:32.992 START TEST nvmf_delete_subsystem 00:29:32.992 ************************************ 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:32.992 * Looking for test storage... 
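The add/remove churn traced above comes from a tight loop in target/ns_hotplug_stress.sh (lines @16-@18 in the trace). A minimal sketch of that loop, reconstructed from the traced commands — the random choice of namespace IDs is an assumption; the real script may select them differently:

    # sketch of the hot-plug loop seen above (paths and NQN taken from the log)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do
        # attach bdev nullK as namespace K+1, then detach some namespace;
        # both calls race against I/O arriving over the fabric
        n=$(( RANDOM % 8 + 1 ))
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$(( n - 1 ))" || true
        m=$(( RANDOM % 8 + 1 ))
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$m" || true
    done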
00:29:32.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:32.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.992 --rc genhtml_branch_coverage=1 00:29:32.992 --rc genhtml_function_coverage=1 00:29:32.992 --rc genhtml_legend=1 00:29:32.992 --rc geninfo_all_blocks=1 00:29:32.992 --rc geninfo_unexecuted_blocks=1 00:29:32.992 00:29:32.992 ' 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:32.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.992 --rc genhtml_branch_coverage=1 00:29:32.992 --rc genhtml_function_coverage=1 00:29:32.992 --rc genhtml_legend=1 00:29:32.992 --rc geninfo_all_blocks=1 00:29:32.992 --rc geninfo_unexecuted_blocks=1 00:29:32.992 00:29:32.992 ' 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:32.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.992 --rc genhtml_branch_coverage=1 00:29:32.992 --rc genhtml_function_coverage=1 00:29:32.992 --rc genhtml_legend=1 00:29:32.992 --rc geninfo_all_blocks=1 00:29:32.992 --rc geninfo_unexecuted_blocks=1 00:29:32.992 00:29:32.992 ' 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:32.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.992 --rc genhtml_branch_coverage=1 00:29:32.992 --rc genhtml_function_coverage=1 00:29:32.992 --rc 
genhtml_legend=1 00:29:32.992 --rc geninfo_all_blocks=1 00:29:32.992 --rc geninfo_unexecuted_blocks=1 00:29:32.992 00:29:32.992 ' 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.992 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.993 13:23:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.993 13:23:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:32.993 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:33.253 Cannot find device "nvmf_init_br" 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:33.253 Cannot find device "nvmf_init_br2" 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:33.253 Cannot find device "nvmf_tgt_br" 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:33.253 Cannot find device "nvmf_tgt_br2" 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:33.253 Cannot find device "nvmf_init_br" 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:33.253 Cannot find device "nvmf_init_br2" 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:33.253 Cannot find device "nvmf_tgt_br" 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:33.253 Cannot find device "nvmf_tgt_br2" 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:33.253 Cannot find device "nvmf_br" 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:33.253 Cannot find device "nvmf_init_if" 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:33.253 Cannot find device "nvmf_init_if2" 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:33.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:33.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:33.253 13:23:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:33.253 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:33.536 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:33.537 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:33.537 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:29:33.537 00:29:33.537 --- 10.0.0.3 ping statistics --- 00:29:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.537 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:33.537 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:33.537 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:29:33.537 00:29:33.537 --- 10.0.0.4 ping statistics --- 00:29:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.537 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:33.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:33.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:29:33.537 00:29:33.537 --- 10.0.0.1 ping statistics --- 00:29:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.537 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:33.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:33.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:29:33.537 00:29:33.537 --- 10.0.0.2 ping statistics --- 00:29:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.537 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=101083 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 101083 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 101083 ']' 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
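The veth/bridge plumbing that nvmftestinit performed above can be reproduced standalone. A condensed sketch keeping the interface names and addresses from the log; the second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2) and the iptables ACCEPT rules are elided:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target side lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                           # bridge joins the two veth peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ping -c 1 10.0.0.3    # initiator reaching the target address across the bridge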
00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.537 13:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:33.537 [2024-12-06 13:23:39.982979] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:33.537 [2024-12-06 13:23:39.984828] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:29:33.537 [2024-12-06 13:23:39.984926] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.819 [2024-12-06 13:23:40.134813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:33.819 [2024-12-06 13:23:40.172047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.819 [2024-12-06 13:23:40.172106] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.819 [2024-12-06 13:23:40.172120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.819 [2024-12-06 13:23:40.172130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.819 [2024-12-06 13:23:40.172138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:33.819 [2024-12-06 13:23:40.172967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.819 [2024-12-06 13:23:40.172975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.819 [2024-12-06 13:23:40.227410] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:33.819 [2024-12-06 13:23:40.228225] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:33.819 [2024-12-06 13:23:40.228258] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
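nvmfappstart above boils down to launching nvmf_tgt inside the namespace and waiting for its RPC socket; the interrupt-mode notices confirm the --interrupt-mode flag took effect on both reactors. A sketch with the flags taken from the log — the polling loop is a stand-in for the framework's waitforlisten helper:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # poll until the app answers on /var/tmp/spdk.sock
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done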
00:29:33.819 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.819 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:33.819 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:33.819 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:33.819 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:33.819 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.819 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:33.819 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.819 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:33.819 [2024-12-06 13:23:40.321924] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.819 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.819 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:33.819 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.819 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:33.819 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.819 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:33.819 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.819 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:34.077 [2024-12-06 13:23:40.346172] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:34.077 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.077 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:34.077 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.077 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:34.077 NULL1 00:29:34.077 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.077 13:23:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:34.077 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.077 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:34.077 Delay0 00:29:34.077 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.077 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:34.077 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.077 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:34.077 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.077 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=101119 00:29:34.077 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:34.077 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:34.077 [2024-12-06 13:23:40.550613] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
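Steps @15-@32 of delete_subsystem.sh, as traced above: build a delay-wrapped null bdev behind cnode1, hammer it with spdk_nvme_perf, then delete the subsystem mid-workload. The 1,000,000 µs delay-bdev latencies keep commands in flight long enough for the deletion to race them. A sketch of the sequence using the exact RPCs from the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512       # 1000 MiB null bdev, 512-byte blocks
    "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem under load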
00:29:35.979 13:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:35.979 13:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.979 13:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:29:36.239 Read completed with error (sct=0, sc=8)
00:29:36.239 Write completed with error (sct=0, sc=8)
00:29:36.239 starting I/O failed: -6
[further identical "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records elided]
00:29:36.239 [2024-12-06 13:23:42.585512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5817e0 is same with the state(6) to be set
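Every record in the storm above is one queued command being failed back to perf: sct=0/sc=8 is the NVMe generic status "Command Aborted due to SQ Deletion", and each "starting I/O failed: -6" is a fresh submission bouncing with -ENXIO once the queue pairs are torn down. Matching the @32-@45 line tags in the xtrace, the driving fragment of delete_subsystem.sh looks roughly like this (a sketch reconstructed from the trace, not the verbatim source; the failure branch is illustrative):

  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # line 32: delete the subsystem while I/O is in flight
  delay=0                                                    # line 34
  while kill -0 $perf_pid; do                                # line 35: probe perf without signalling it
      sleep 0.5                                              # line 36
      (( delay++ > 30 )) && exit 1                           # line 38: fail if perf outlives the deletion by ~15 s
  done
  NOT wait $perf_pid                                         # line 45: perf must have exited with an error, not hung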
[further identical completion records elided]
00:29:36.240 [2024-12-06 13:23:42.586222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x580a50 is same with the state(6) to be set
[further identical completion and "starting I/O failed: -6" records elided]
00:29:36.240 [2024-12-06 13:23:42.587800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbc7c00d050 is same with the state(6) to be set
00:29:37.177 [2024-12-06 13:23:43.565654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x575aa0 is same with the state(6) to be set
[further identical completion records elided]
00:29:37.177 [2024-12-06 13:23:43.585935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbc7c00d380 is same with the state(6) to be set
[further identical completion records elided]
00:29:37.177 [2024-12-06 13:23:43.587215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbc7c00d6b0 is same with the state(6) to be set
[further identical completion records elided]
00:29:37.177 [2024-12-06 13:23:43.587470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbc7c000c70 is same with the state(6) to be set
[further identical completion records elided]
00:29:37.177 [2024-12-06 13:23:43.588551] nvme_tcp.c:
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x580c30 is same with the state(6) to be set 00:29:37.177 Initializing NVMe Controllers 00:29:37.177 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:29:37.177 Controller IO queue size 128, less than required. 00:29:37.177 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:37.177 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:37.177 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:37.177 Initialization complete. Launching workers. 00:29:37.177 ======================================================== 00:29:37.177 Latency(us) 00:29:37.177 Device Information : IOPS MiB/s Average min max 00:29:37.178 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 154.38 0.08 880644.31 434.36 1012667.80 00:29:37.178 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 146.44 0.07 1220133.49 3165.52 2003534.39 00:29:37.178 ======================================================== 00:29:37.178 Total : 300.82 0.15 1045907.20 434.36 2003534.39 00:29:37.178 00:29:37.178 [2024-12-06 13:23:43.589365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x575aa0 (9): Bad file descriptor 00:29:37.178 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:37.178 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.178 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:37.178 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 101119 00:29:37.178 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 101119 00:29:37.747 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (101119) - No such process 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 101119 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 101119 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- 
# wait 101119 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:37.747 [2024-12-06 13:23:44.114307] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=101165 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101165 00:29:37.747 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:38.006 [2024-12-06 13:23:44.309016] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:38.265 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:38.265 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101165 00:29:38.265 13:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:38.831 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:38.831 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101165 00:29:38.831 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:39.455 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:39.455 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101165 00:29:39.455 13:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:39.714 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:39.714 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101165 00:29:39.714 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:40.280 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:40.280 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101165 00:29:40.280 13:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:40.848 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:40.848 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101165 00:29:40.848 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:41.107 Initializing NVMe Controllers 00:29:41.107 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:29:41.107 Controller IO queue size 128, less than required. 00:29:41.107 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:41.107 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:41.107 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:41.107 Initialization complete. Launching workers. 
00:29:41.107 ======================================================== 00:29:41.107 Latency(us) 00:29:41.107 Device Information : IOPS MiB/s Average min max 00:29:41.107 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005944.16 1000146.61 1014521.61 00:29:41.107 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004109.63 1000181.27 1041434.86 00:29:41.107 ======================================================== 00:29:41.107 Total : 256.00 0.12 1005026.90 1000146.61 1041434.86 00:29:41.107 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101165 00:29:41.366 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (101165) - No such process 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 101165 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:41.366 rmmod nvme_tcp 00:29:41.366 rmmod nvme_fabrics 00:29:41.366 rmmod nvme_keyring 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 101083 ']' 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 101083 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 101083 ']' 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 101083 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- 
# ps --no-headers -o comm= 101083 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:41.366 killing process with pid 101083 00:29:41.366 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101083' 00:29:41.367 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 101083 00:29:41.367 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 101083 00:29:41.625 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:41.625 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:41.625 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:41.625 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:41.625 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:41.625 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:41.625 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:41.625 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:41.625 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:41.625 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:41.625 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:41.625 13:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:41.625 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:41.625 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:41.625 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:41.625 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:41.625 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:41.625 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:41.625 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:41.625 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:29:41.625 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:41.625 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:41.884 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:41.884 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.884 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.884 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.884 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:29:41.884 00:29:41.884 real 0m8.889s 00:29:41.884 user 0m24.143s 00:29:41.884 sys 0m2.293s 00:29:41.884 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:41.884 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:41.884 ************************************ 00:29:41.884 END TEST nvmf_delete_subsystem 00:29:41.884 ************************************ 00:29:41.884 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:41.884 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:41.884 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:41.884 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:41.884 ************************************ 00:29:41.884 START TEST nvmf_host_management 00:29:41.884 ************************************ 00:29:41.884 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:41.884 * Looking for test storage... 
00:29:41.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:41.884 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:41.884 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:29:41.884 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:42.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.143 --rc genhtml_branch_coverage=1 00:29:42.143 --rc genhtml_function_coverage=1 00:29:42.143 --rc genhtml_legend=1 00:29:42.143 --rc geninfo_all_blocks=1 00:29:42.143 --rc geninfo_unexecuted_blocks=1 00:29:42.143 00:29:42.143 ' 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:42.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.143 --rc genhtml_branch_coverage=1 00:29:42.143 --rc genhtml_function_coverage=1 00:29:42.143 --rc genhtml_legend=1 00:29:42.143 --rc geninfo_all_blocks=1 00:29:42.143 --rc geninfo_unexecuted_blocks=1 00:29:42.143 00:29:42.143 ' 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:42.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.143 --rc genhtml_branch_coverage=1 00:29:42.143 --rc genhtml_function_coverage=1 00:29:42.143 --rc genhtml_legend=1 00:29:42.143 --rc geninfo_all_blocks=1 00:29:42.143 --rc geninfo_unexecuted_blocks=1 00:29:42.143 00:29:42.143 ' 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:42.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.143 --rc genhtml_branch_coverage=1 00:29:42.143 --rc genhtml_function_coverage=1 00:29:42.143 --rc genhtml_legend=1 
00:29:42.143 --rc geninfo_all_blocks=1 00:29:42.143 --rc geninfo_unexecuted_blocks=1 00:29:42.143 00:29:42.143 ' 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.143 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.144 13:23:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:42.144 13:23:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:42.144 Cannot find device "nvmf_init_br" 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:42.144 Cannot find device "nvmf_init_br2" 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:42.144 Cannot find device "nvmf_tgt_br" 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:42.144 Cannot find device "nvmf_tgt_br2" 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:42.144 Cannot find device "nvmf_init_br" 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 00:29:42.144 Cannot find device "nvmf_init_br2" 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:42.144 Cannot find device "nvmf_tgt_br" 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:42.144 Cannot find device "nvmf_tgt_br2" 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:42.144 Cannot find device "nvmf_br" 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:42.144 Cannot find device "nvmf_init_if" 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:42.144 Cannot find device "nvmf_init_if2" 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:42.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:29:42.144 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:42.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:42.145 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:29:42.145 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:42.145 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:42.145 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:42.145 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:42.145 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:42.145 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:42.403 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:42.404 13:23:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:42.404 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:42.404 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:29:42.404 00:29:42.404 --- 10.0.0.3 ping statistics --- 00:29:42.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.404 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:42.404 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:42.404 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:29:42.404 00:29:42.404 --- 10.0.0.4 ping statistics --- 00:29:42.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.404 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:42.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:42.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:29:42.404 00:29:42.404 --- 10.0.0.1 ping statistics --- 00:29:42.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.404 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:42.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:42.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:29:42.404 00:29:42.404 --- 10.0.0.2 ping statistics --- 00:29:42.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.404 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=101456 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 101456 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 101456 ']' 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
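[Annotation] For readers reconstructing this environment: the commands traced above build two veth paths between the root namespace and a target namespace, bridge them, open TCP port 4420 in the firewall, and prove reachability with one ping per direction before nvmf_tgt is launched inside the namespace via the NVMF_TARGET_NS_CMD prefix. A condensed sketch of the first path, using only commands and names that appear in the trace (the harness repeats the same steps for the *2 interfaces):

  # One initiator/target path; interface names and addresses are the ones
  # visible in the log above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk      # target end moves into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge                     # bridge joins the root-side peers
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # every firewall rule is tagged so teardown can strip them all with one grep
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                  # root ns -> target ns
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

The earlier runs of "Cannot find device" and "Cannot open network namespace" are the cleanup pass executing before setup; each teardown command is followed by true, so a missing device is expected and non-fatal.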
00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.404 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:42.663 [2024-12-06 13:23:48.944623] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:42.663 [2024-12-06 13:23:48.945693] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:29:42.663 [2024-12-06 13:23:48.945761] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.663 [2024-12-06 13:23:49.095261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:42.663 [2024-12-06 13:23:49.134779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.663 [2024-12-06 13:23:49.134844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.663 [2024-12-06 13:23:49.134858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:42.663 [2024-12-06 13:23:49.134869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:42.663 [2024-12-06 13:23:49.134879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:42.663 [2024-12-06 13:23:49.135828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.663 [2024-12-06 13:23:49.135916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:42.663 [2024-12-06 13:23:49.135989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:42.663 [2024-12-06 13:23:49.135996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.922 [2024-12-06 13:23:49.191971] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:42.922 [2024-12-06 13:23:49.192059] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:42.922 [2024-12-06 13:23:49.192138] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:42.922 [2024-12-06 13:23:49.192347] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:42.922 [2024-12-06 13:23:49.193139] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
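[Annotation] nvmfappstart has just launched nvmf_tgt with -m 0x1E (a core mask selecting cores 1 through 4, which is why four "Reactor started" notices appear) and --interrupt-mode, and it blocks in waitforlisten until pid 101456 answers on /var/tmp/spdk.sock. A paraphrase of that wait loop, assuming $rootdir points at the spdk checkout; the real helper in autotest_common.sh carries more error handling:

  # Poll until the app both stays alive and answers RPC; rpc_get_methods is
  # a cheap request any SPDK app can serve once its RPC server is up.
  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i != 0; i--)); do
      kill -0 "$pid" 2> /dev/null || return 1   # app died during startup
      if "$rootdir/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
        return 0
      fi
      sleep 0.1
    done
    return 1                                    # never started listening
  }

The repeated "to intr mode from intr mode" notices record threads being switched to the mode they are already in, which is what you would expect when the app was started with --interrupt-mode in the first place.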
00:29:42.922 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:42.922 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:42.922 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:42.922 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:42.922 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:42.922 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:42.922 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:42.922 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.922 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:42.922 [2024-12-06 13:23:49.270297] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.922 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.922 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:42.922 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:42.922 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:42.922 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:29:42.922 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:42.922 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:42.922 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:42.923 Malloc0 00:29:42.923 [2024-12-06 13:23:49.349334] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=101513 00:29:42.923 13:23:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 101513 /var/tmp/bdevperf.sock 00:29:42.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 101513 ']' 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:42.923 { 00:29:42.923 "params": { 00:29:42.923 "name": "Nvme$subsystem", 00:29:42.923 "trtype": "$TEST_TRANSPORT", 00:29:42.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.923 "adrfam": "ipv4", 00:29:42.923 "trsvcid": "$NVMF_PORT", 00:29:42.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.923 "hdgst": ${hdgst:-false}, 00:29:42.923 "ddgst": ${ddgst:-false} 00:29:42.923 }, 00:29:42.923 "method": "bdev_nvme_attach_controller" 00:29:42.923 } 00:29:42.923 EOF 00:29:42.923 )") 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
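[Annotation] The heredoc above is gen_nvmf_target_json emitting one attach-controller stanza per argument (just "0" here, so $subsystem renders Nvme0, cnode0 and host0 against $NVMF_FIRST_TARGET_IP, which is 10.0.0.3); jq then normalizes it and the finished JSON is printed just below. bdevperf receives it through bash process substitution, which is why the traced command line shows --json /dev/fd/63 rather than a file name. Reassembled from the log:

  # -q 64: 64 I/Os in flight, -o 65536: 64 KiB each, -w verify: write then
  # read back and compare, -t 10: run for ten seconds
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10

The queue depth is worth remembering: 64 is exactly how many in-flight WRITEs get aborted when the test removes the host a moment later.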
00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:42.923 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:42.923 "params": { 00:29:42.923 "name": "Nvme0", 00:29:42.923 "trtype": "tcp", 00:29:42.923 "traddr": "10.0.0.3", 00:29:42.923 "adrfam": "ipv4", 00:29:42.923 "trsvcid": "4420", 00:29:42.923 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:42.923 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:42.923 "hdgst": false, 00:29:42.923 "ddgst": false 00:29:42.923 }, 00:29:42.923 "method": "bdev_nvme_attach_controller" 00:29:42.923 }' 00:29:43.181 [2024-12-06 13:23:49.455010] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:29:43.181 [2024-12-06 13:23:49.455105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101513 ] 00:29:43.181 [2024-12-06 13:23:49.603311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.181 [2024-12-06 13:23:49.643118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.440 Running I/O for 10 seconds... 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:29:43.440 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:29:43.699 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:29:43.699 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:43.699 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:43.699 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.699 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:43.699 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:43.699 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.699 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=502 00:29:43.699 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 502 -ge 100 ']' 00:29:43.699 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:43.699 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:43.699 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:43.699 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:43.699 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.699 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:43.959 [2024-12-06 13:23:50.228902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c6530 is same with the state(6) to be set 00:29:43.959 [2024-12-06 13:23:50.228982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c6530 is same with the state(6) to be set 00:29:43.959 [2024-12-06 13:23:50.228998] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c6530 is same with the state(6) to be set 00:29:43.959 [2024-12-06 13:23:50.229010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c6530 is same with the state(6) to be set 00:29:43.959 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.959 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:43.959 [2024-12-06 13:23:50.233898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.959 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.959 [2024-12-06 13:23:50.233955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.233972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.959 [2024-12-06 13:23:50.233982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.233993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.959 [2024-12-06 13:23:50.234002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.234012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.959 [2024-12-06 13:23:50.234021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.234031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff4130 is same with the state(6) to be set 00:29:43.959 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:43.959 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.959 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:43.959 [2024-12-06 13:23:50.242110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.959 [2024-12-06 13:23:50.242683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.959 [2024-12-06 13:23:50.242692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.242703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.242712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.242723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.242732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.242743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.242752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.242763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.242772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.242783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.242792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.242803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:43.960 [2024-12-06 13:23:50.242813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.242824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.242834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.242846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.242855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.242866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.242875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.242886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.242895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.242906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.242916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.242927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.242936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.242962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.242973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.242984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.242993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 
[2024-12-06 13:23:50.243034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 
13:23:50.243250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 13:23:50.243433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.960 [2024-12-06 13:23:50.243444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.960 [2024-12-06 
13:23:50.243454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:43.960 [2024-12-06 13:23:50.243465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.960 [2024-12-06 13:23:50.243474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:43.960 [2024-12-06 13:23:50.244659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:43.960 task offset: 73728 on job bdev=Nvme0n1 fails
00:29:43.960
00:29:43.960 Latency(us)
00:29:43.960 [2024-12-06T13:23:50.488Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:43.960 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:43.960 Job: Nvme0n1 ended in about 0.46 seconds with error
00:29:43.960 Verification LBA range: start 0x0 length 0x400
00:29:43.960 Nvme0n1                     :       0.46    1264.15      79.01     140.46       0.00   43806.14    1779.90   47900.86
00:29:43.960 [2024-12-06T13:23:50.488Z] ===================================================================================================================
00:29:43.961 [2024-12-06T13:23:50.489Z] Total                       :               1264.15      79.01     140.46       0.00   43806.14    1779.90   47900.86
00:29:43.961 [2024-12-06 13:23:50.246780] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:43.961 [2024-12-06 13:23:50.246804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff4130 (9): Bad file descriptor
00:29:43.961 [2024-12-06 13:23:50.249862] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
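[Annotation] The long run of WRITE / "ABORTED - SQ DELETION" pairs above is the fault being exercised, not a malfunction. It corresponds to the host_management.sh@84-@87 sequence traced earlier in the log:

  # Pull host0 out of cnode0's allow list while bdevperf has 64 WRITEs in
  # flight, then put it back; rpc_cmd is the harness wrapper around rpc.py.
  rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  sleep 1

Removing the host disconnects the controller under load, so each queued command (cid 0 through 63, one per slot of the -q 64 queue, at consecutive 128-block LBAs starting from the failing offset 73728) completes as aborted; the Fail/s column above (140.46) counts them. bdev_nvme then resets the controller, and with the host re-added the reset succeeds. bdevperf exits non-zero along the way, which is why the kill -9 that follows finds no process.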
00:29:44.892 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 101513 00:29:44.892 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (101513) - No such process 00:29:44.892 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:44.892 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:44.892 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:44.892 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:44.892 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:44.892 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:44.892 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:44.892 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:44.892 { 00:29:44.892 "params": { 00:29:44.892 "name": "Nvme$subsystem", 00:29:44.892 "trtype": "$TEST_TRANSPORT", 00:29:44.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.892 "adrfam": "ipv4", 00:29:44.892 "trsvcid": "$NVMF_PORT", 00:29:44.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.892 "hdgst": ${hdgst:-false}, 00:29:44.892 "ddgst": ${ddgst:-false} 00:29:44.892 }, 00:29:44.892 "method": "bdev_nvme_attach_controller" 00:29:44.892 } 00:29:44.892 EOF 00:29:44.892 )") 00:29:44.892 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:44.892 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:44.892 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:44.892 13:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:44.892 "params": { 00:29:44.892 "name": "Nvme0", 00:29:44.892 "trtype": "tcp", 00:29:44.892 "traddr": "10.0.0.3", 00:29:44.892 "adrfam": "ipv4", 00:29:44.892 "trsvcid": "4420", 00:29:44.892 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:44.892 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:44.892 "hdgst": false, 00:29:44.892 "ddgst": false 00:29:44.892 }, 00:29:44.892 "method": "bdev_nvme_attach_controller" 00:29:44.892 }' 00:29:44.892 [2024-12-06 13:23:51.305570] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
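[Annotation] Having proven the failure path, the test reruns bdevperf for a short clean pass (the spdk_pid101555 instance below) against the same generated JSON, this time with -t 1. The throughput in the result table that follows is internally consistent; at 64 KiB per I/O, MiB/s is just IOPS divided by 16:

  # cross-checking the two result tables in this run
  #   clean 1 s pass :  1496.52 IOPS / 16 = 93.53 MiB/s
  #   aborted pass   :  1264.15 IOPS / 16 = 79.01 MiB/s

With the verify pass green, stoptarget and nvmftestfini unwind everything: bdevperf state files removed, kernel nvme modules unloaded, the SPDK_NVMF-tagged iptables rules stripped by iptr, and the veth/bridge/namespace topology deleted.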
00:29:44.892 [2024-12-06 13:23:51.305676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101555 ] 00:29:45.150 [2024-12-06 13:23:51.460081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.150 [2024-12-06 13:23:51.498528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.150 Running I/O for 1 seconds... 00:29:46.522 1472.00 IOPS, 92.00 MiB/s 00:29:46.522 Latency(us) 00:29:46.522 [2024-12-06T13:23:53.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.522 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:46.522 Verification LBA range: start 0x0 length 0x400 00:29:46.522 Nvme0n1 : 1.03 1496.52 93.53 0.00 0.00 41793.66 5183.30 42419.67 00:29:46.522 [2024-12-06T13:23:53.050Z] =================================================================================================================== 00:29:46.522 [2024-12-06T13:23:53.050Z] Total : 1496.52 93.53 0.00 0.00 41793.66 5183.30 42419.67 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:46.522 rmmod nvme_tcp 00:29:46.522 rmmod nvme_fabrics 00:29:46.522 rmmod nvme_keyring 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 101456 ']' 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 101456 00:29:46.522 13:23:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 101456 ']' 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 101456 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101456 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:46.522 killing process with pid 101456 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101456' 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 101456 00:29:46.522 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 101456 00:29:46.780 [2024-12-06 13:23:53.066098] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:46.780 13:23:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:46.780 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:47.039 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:47.039 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:47.039 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.039 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.039 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.039 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:29:47.039 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:47.039 00:29:47.039 real 0m5.115s 00:29:47.039 user 0m16.218s 00:29:47.039 sys 0m2.305s 00:29:47.039 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:47.039 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:47.039 ************************************ 00:29:47.039 END TEST nvmf_host_management 00:29:47.039 ************************************ 00:29:47.039 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:47.039 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:47.039 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:47.039 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:47.039 ************************************ 00:29:47.039 START TEST nvmf_lvol 00:29:47.039 ************************************ 00:29:47.039 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:47.039 * Looking for test storage... 
00:29:47.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:47.039 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:47.039 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:29:47.039 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:47.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.297 --rc genhtml_branch_coverage=1 00:29:47.297 --rc genhtml_function_coverage=1 00:29:47.297 --rc genhtml_legend=1 00:29:47.297 --rc geninfo_all_blocks=1 00:29:47.297 --rc geninfo_unexecuted_blocks=1 00:29:47.297 00:29:47.297 ' 00:29:47.297 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:47.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.298 --rc genhtml_branch_coverage=1 00:29:47.298 --rc genhtml_function_coverage=1 00:29:47.298 --rc genhtml_legend=1 00:29:47.298 --rc geninfo_all_blocks=1 00:29:47.298 --rc geninfo_unexecuted_blocks=1 00:29:47.298 00:29:47.298 ' 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:47.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.298 --rc genhtml_branch_coverage=1 00:29:47.298 --rc genhtml_function_coverage=1 00:29:47.298 --rc genhtml_legend=1 00:29:47.298 --rc geninfo_all_blocks=1 00:29:47.298 --rc geninfo_unexecuted_blocks=1 00:29:47.298 00:29:47.298 ' 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:47.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.298 --rc genhtml_branch_coverage=1 00:29:47.298 --rc genhtml_function_coverage=1 00:29:47.298 --rc genhtml_legend=1 00:29:47.298 --rc geninfo_all_blocks=1 00:29:47.298 --rc geninfo_unexecuted_blocks=1 00:29:47.298 00:29:47.298 ' 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.298 13:23:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:47.298 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:47.298 Cannot find device "nvmf_init_br" 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:47.299 Cannot find device "nvmf_init_br2" 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:47.299 Cannot find device "nvmf_tgt_br" 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:47.299 Cannot find device "nvmf_tgt_br2" 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:47.299 Cannot find device "nvmf_init_br" 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:47.299 Cannot find device "nvmf_init_br2" 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:47.299 Cannot find 
device "nvmf_tgt_br" 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:47.299 Cannot find device "nvmf_tgt_br2" 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:47.299 Cannot find device "nvmf_br" 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:47.299 Cannot find device "nvmf_init_if" 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:47.299 Cannot find device "nvmf_init_if2" 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:47.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:47.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:47.299 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:47.556 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:47.556 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:47.556 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:47.556 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:47.556 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:47.556 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:47.556 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:47.556 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:47.556 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:47.556 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:47.556 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:47.556 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:47.556 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:47.556 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:47.556 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:47.556 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:47.556 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:47.557 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:47.557 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:47.557 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:47.557 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:47.557 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:47.557 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:47.557 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:47.557 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:47.557 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:47.557 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:47.557 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:29:47.557 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:29:47.557 00:29:47.557 --- 10.0.0.3 ping statistics --- 00:29:47.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.557 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:29:47.557 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:47.557 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:47.557 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:29:47.557 00:29:47.557 --- 10.0.0.4 ping statistics --- 00:29:47.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.557 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:29:47.557 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:47.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:47.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:29:47.557 00:29:47.557 --- 10.0.0.1 ping statistics --- 00:29:47.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.557 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:47.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:29:47.557 00:29:47.557 --- 10.0.0.2 ping statistics --- 00:29:47.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.557 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=101815 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 101815 00:29:47.557 13:23:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 101815 ']' 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.557 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:47.897 [2024-12-06 13:23:54.110079] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:47.897 [2024-12-06 13:23:54.111385] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:29:47.897 [2024-12-06 13:23:54.111461] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.897 [2024-12-06 13:23:54.264330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:47.897 [2024-12-06 13:23:54.303611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.897 [2024-12-06 13:23:54.303673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.897 [2024-12-06 13:23:54.303687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.897 [2024-12-06 13:23:54.303697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.897 [2024-12-06 13:23:54.303706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:47.897 [2024-12-06 13:23:54.304517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.897 [2024-12-06 13:23:54.304655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.897 [2024-12-06 13:23:54.304663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.897 [2024-12-06 13:23:54.363863] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:47.897 [2024-12-06 13:23:54.363934] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:47.897 [2024-12-06 13:23:54.364664] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:47.897 [2024-12-06 13:23:54.364674] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
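nvmfappstart launches the target inside the nvmf_tgt_ns_spdk namespace with a three-core mask and --interrupt-mode, so the reactors can sleep in epoll between events rather than busy-poll; the "to intr mode" notices above confirm that app_thread and all three nvmf_tgt_poll_group threads came up interrupt-driven. The bring-up reduces to the one traced command (backgrounding and the waitforlisten wait on /var/tmp/spdk.sock omitted):

    # start the target in its own network namespace, interrupt-driven, on cores 0-2
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    nvmfpid=$!   # 101815 in this run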
00:29:47.897 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.897 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:47.897 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:47.897 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:47.897 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:48.173 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.173 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:48.430 [2024-12-06 13:23:54.729616] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.430 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:48.689 13:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:48.689 13:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:48.947 13:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:48.947 13:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:49.510 13:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:49.768 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c099ac02-ef90-4cc4-9816-bf1df31e191c 00:29:49.768 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c099ac02-ef90-4cc4-9816-bf1df31e191c lvol 20 00:29:50.027 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f0ce1649-9f06-4fb3-a9e3-6f2cf6a11c13 00:29:50.027 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:50.286 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f0ce1649-9f06-4fb3-a9e3-6f2cf6a11c13 00:29:50.544 13:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:50.802 [2024-12-06 13:23:57.269558] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:50.802 13:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:29:51.370 13:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=101949 00:29:51.370 13:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:51.370 13:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:52.305 13:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot f0ce1649-9f06-4fb3-a9e3-6f2cf6a11c13 MY_SNAPSHOT 00:29:52.564 13:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d85517dc-35f3-4502-b934-df8741e4d250 00:29:52.564 13:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize f0ce1649-9f06-4fb3-a9e3-6f2cf6a11c13 30 00:29:52.822 13:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d85517dc-35f3-4502-b934-df8741e4d250 MY_CLONE 00:29:53.389 13:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b762f725-df18-4514-9f61-d92617180b95 00:29:53.389 13:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate b762f725-df18-4514-9f61-d92617180b95 00:29:53.955 13:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 101949 00:30:02.140 Initializing NVMe Controllers 00:30:02.140 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:30:02.140 Controller IO queue size 128, less than required. 00:30:02.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:02.140 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:02.140 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:02.140 Initialization complete. Launching workers. 
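The heart of the test sits in the RPCs traced above: while spdk_nvme_perf (pid 101949) drives 128-deep randwrite at the exported namespace from cores 0x18, the lvol is snapshotted, grown, cloned, and the clone inflated, all live. Condensed to the four bdev_lvol calls, with the UUIDs from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # snapshotting turns the lvol itself into a thin clone of the new read-only snapshot
    $rpc bdev_lvol_snapshot f0ce1649-9f06-4fb3-a9e3-6f2cf6a11c13 MY_SNAPSHOT
    # grow the live lvol from LVOL_BDEV_INIT_SIZE (20) to LVOL_BDEV_FINAL_SIZE (30)
    $rpc bdev_lvol_resize   f0ce1649-9f06-4fb3-a9e3-6f2cf6a11c13 30
    # take a writable clone off the snapshot...
    $rpc bdev_lvol_clone    d85517dc-35f3-4502-b934-df8741e4d250 MY_CLONE
    # ...then inflate it, allocating its clusters and detaching it from the parent
    $rpc bdev_lvol_inflate  b762f725-df18-4514-9f61-d92617180b95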
00:30:02.140 ======================================================== 00:30:02.140 Latency(us) 00:30:02.140 Device Information : IOPS MiB/s Average min max 00:30:02.140 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10354.50 40.45 12362.58 4882.42 72853.31 00:30:02.140 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10371.40 40.51 12343.02 4395.05 71565.47 00:30:02.140 ======================================================== 00:30:02.140 Total : 20725.90 80.96 12352.79 4395.05 72853.31 00:30:02.140 00:30:02.140 13:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:02.140 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f0ce1649-9f06-4fb3-a9e3-6f2cf6a11c13 00:30:02.140 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c099ac02-ef90-4cc4-9816-bf1df31e191c 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:02.397 rmmod nvme_tcp 00:30:02.397 rmmod nvme_fabrics 00:30:02.397 rmmod nvme_keyring 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 101815 ']' 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 101815 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 101815 ']' 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 101815 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101815 00:30:02.397 13:24:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:02.397 killing process with pid 101815 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101815' 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 101815 00:30:02.397 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 101815 00:30:02.655 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:02.655 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:02.655 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:02.655 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:02.655 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:02.655 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:02.655 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:02.655 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:02.655 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:02.655 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:02.655 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:02.655 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:02.655 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:02.655 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:02.655 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:02.655 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:02.655 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:02.655 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:02.911 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:02.911 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:02.911 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:02.911 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:02.911 
13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:02.911 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.911 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:02.911 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.911 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:30:02.911 00:30:02.911 real 0m15.906s 00:30:02.911 user 0m56.810s 00:30:02.911 sys 0m5.687s 00:30:02.911 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:02.911 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:02.911 ************************************ 00:30:02.911 END TEST nvmf_lvol 00:30:02.911 ************************************ 00:30:02.912 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:02.912 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:02.912 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:02.912 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:02.912 ************************************ 00:30:02.912 START TEST nvmf_lvs_grow 00:30:02.912 ************************************ 00:30:02.912 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:02.912 * Looking for test storage... 
00:30:02.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:02.912 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:02.912 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:30:02.912 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:03.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.171 --rc genhtml_branch_coverage=1 00:30:03.171 --rc genhtml_function_coverage=1 00:30:03.171 --rc genhtml_legend=1 00:30:03.171 --rc geninfo_all_blocks=1 00:30:03.171 --rc geninfo_unexecuted_blocks=1 00:30:03.171 00:30:03.171 ' 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:03.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.171 --rc genhtml_branch_coverage=1 00:30:03.171 --rc genhtml_function_coverage=1 00:30:03.171 --rc genhtml_legend=1 00:30:03.171 --rc geninfo_all_blocks=1 00:30:03.171 --rc geninfo_unexecuted_blocks=1 00:30:03.171 00:30:03.171 ' 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:03.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.171 --rc genhtml_branch_coverage=1 00:30:03.171 --rc genhtml_function_coverage=1 00:30:03.171 --rc genhtml_legend=1 00:30:03.171 --rc geninfo_all_blocks=1 00:30:03.171 --rc geninfo_unexecuted_blocks=1 00:30:03.171 00:30:03.171 ' 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:03.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.171 --rc genhtml_branch_coverage=1 00:30:03.171 --rc genhtml_function_coverage=1 00:30:03.171 --rc genhtml_legend=1 00:30:03.171 --rc geninfo_all_blocks=1 00:30:03.171 --rc geninfo_unexecuted_blocks=1 00:30:03.171 00:30:03.171 ' 00:30:03.171 13:24:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.171 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.172 13:24:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:03.172 Cannot find device "nvmf_init_br" 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:03.172 Cannot find device "nvmf_init_br2" 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:03.172 Cannot find device "nvmf_tgt_br" 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:03.172 Cannot find device "nvmf_tgt_br2" 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:03.172 Cannot find device "nvmf_init_br" 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:03.172 Cannot find device "nvmf_init_br2" 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:03.172 Cannot find device "nvmf_tgt_br" 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:03.172 Cannot find device "nvmf_tgt_br2" 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:03.172 Cannot find device "nvmf_br" 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:03.172 Cannot find device "nvmf_init_if" 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:30:03.172 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:03.430 Cannot find device "nvmf_init_if2" 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:03.430 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:03.430 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 00:30:03.430 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:03.430 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:30:03.430 00:30:03.430 --- 10.0.0.3 ping statistics --- 00:30:03.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.430 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:03.430 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:03.430 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:30:03.430 00:30:03.430 --- 10.0.0.4 ping statistics --- 00:30:03.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.430 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:03.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:03.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:30:03.430 00:30:03.430 --- 10.0.0.1 ping statistics --- 00:30:03.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.430 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:30:03.430 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:03.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:03.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:30:03.430 00:30:03.430 --- 10.0.0.2 ping statistics --- 00:30:03.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.430 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=102359 00:30:03.748 13:24:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 102359 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 102359 ']' 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:03.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:03.748 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:03.748 [2024-12-06 13:24:10.047860] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:03.748 [2024-12-06 13:24:10.048967] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:30:03.748 [2024-12-06 13:24:10.049027] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:03.748 [2024-12-06 13:24:10.200103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.748 [2024-12-06 13:24:10.244705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:03.748 [2024-12-06 13:24:10.244786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:03.748 [2024-12-06 13:24:10.244802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:03.748 [2024-12-06 13:24:10.244813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:03.748 [2024-12-06 13:24:10.244821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:03.748 [2024-12-06 13:24:10.245226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.006 [2024-12-06 13:24:10.305277] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:04.006 [2024-12-06 13:24:10.305623] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:04.006 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:04.006 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:04.006 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:04.006 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:04.006 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:04.006 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:04.006 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:04.263 [2024-12-06 13:24:10.662282] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.263 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:04.263 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:04.263 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:04.263 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:04.263 ************************************ 00:30:04.263 START TEST lvs_grow_clean 00:30:04.263 ************************************ 00:30:04.263 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:30:04.263 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:04.263 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:04.263 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:04.263 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:04.263 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:04.263 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:04.264 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:30:04.264 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:30:04.264 13:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:04.828 13:24:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:04.828 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:05.085 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=35323fc2-5229-4e2a-b9e7-e29b8fa5a594 00:30:05.085 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35323fc2-5229-4e2a-b9e7-e29b8fa5a594 00:30:05.085 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:05.342 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:05.342 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:05.342 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 35323fc2-5229-4e2a-b9e7-e29b8fa5a594 lvol 150 00:30:05.600 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4aa18bdb-c59f-4a4a-9348-5e895019e734 00:30:05.600 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:30:05.600 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:05.859 [2024-12-06 13:24:12.365863] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:05.859 [2024-12-06 13:24:12.366034] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:05.859 true 00:30:06.117 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35323fc2-5229-4e2a-b9e7-e29b8fa5a594 00:30:06.117 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:06.376 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:06.376 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:06.635 13:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4aa18bdb-c59f-4a4a-9348-5e895019e734 00:30:06.893 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:07.151 [2024-12-06 13:24:13.534215] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:07.151 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:30:07.409 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=102512 00:30:07.409 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:07.409 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:07.409 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 102512 /var/tmp/bdevperf.sock 00:30:07.409 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 102512 ']' 00:30:07.409 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:07.409 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:07.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:07.409 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:07.409 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:07.409 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:07.409 [2024-12-06 13:24:13.871672] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:30:07.409 [2024-12-06 13:24:13.871761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102512 ] 00:30:07.666 [2024-12-06 13:24:14.016536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.666 [2024-12-06 13:24:14.050155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.666 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:07.666 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:07.666 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:08.234 Nvme0n1 00:30:08.234 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:08.493 [ 00:30:08.493 { 00:30:08.493 "aliases": [ 00:30:08.493 "4aa18bdb-c59f-4a4a-9348-5e895019e734" 00:30:08.493 ], 00:30:08.493 "assigned_rate_limits": { 00:30:08.493 "r_mbytes_per_sec": 0, 00:30:08.493 "rw_ios_per_sec": 0, 00:30:08.493 "rw_mbytes_per_sec": 0, 00:30:08.493 "w_mbytes_per_sec": 0 00:30:08.493 }, 00:30:08.493 "block_size": 4096, 00:30:08.493 "claimed": false, 00:30:08.493 "driver_specific": { 00:30:08.493 "mp_policy": "active_passive", 00:30:08.493 "nvme": [ 00:30:08.493 { 00:30:08.493 "ctrlr_data": { 00:30:08.493 "ana_reporting": false, 00:30:08.493 "cntlid": 1, 00:30:08.493 "firmware_revision": "25.01", 00:30:08.493 "model_number": "SPDK bdev Controller", 00:30:08.493 "multi_ctrlr": true, 00:30:08.493 "oacs": { 00:30:08.493 "firmware": 0, 00:30:08.493 "format": 0, 00:30:08.493 "ns_manage": 0, 00:30:08.493 "security": 0 00:30:08.493 }, 00:30:08.493 "serial_number": "SPDK0", 00:30:08.493 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:08.493 "vendor_id": "0x8086" 00:30:08.493 }, 00:30:08.493 "ns_data": { 00:30:08.493 "can_share": true, 00:30:08.493 "id": 1 00:30:08.493 }, 00:30:08.493 "trid": { 00:30:08.493 "adrfam": "IPv4", 00:30:08.494 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:08.494 "traddr": "10.0.0.3", 00:30:08.494 "trsvcid": "4420", 00:30:08.494 "trtype": "TCP" 00:30:08.494 }, 00:30:08.494 "vs": { 00:30:08.494 "nvme_version": "1.3" 00:30:08.494 } 00:30:08.494 } 00:30:08.494 ] 00:30:08.494 }, 00:30:08.494 "memory_domains": [ 00:30:08.494 { 00:30:08.494 "dma_device_id": "system", 00:30:08.494 "dma_device_type": 1 00:30:08.494 } 00:30:08.494 ], 00:30:08.494 "name": "Nvme0n1", 00:30:08.494 "num_blocks": 38912, 00:30:08.494 "numa_id": -1, 00:30:08.494 "product_name": "NVMe disk", 00:30:08.494 "supported_io_types": { 00:30:08.494 "abort": true, 00:30:08.494 "compare": true, 00:30:08.494 "compare_and_write": true, 00:30:08.494 "copy": true, 00:30:08.494 "flush": true, 00:30:08.494 "get_zone_info": false, 00:30:08.494 "nvme_admin": true, 00:30:08.494 "nvme_io": true, 00:30:08.494 "nvme_io_md": false, 00:30:08.494 "nvme_iov_md": false, 00:30:08.494 "read": true, 00:30:08.494 "reset": true, 00:30:08.494 "seek_data": false, 00:30:08.494 
"seek_hole": false, 00:30:08.494 "unmap": true, 00:30:08.494 "write": true, 00:30:08.494 "write_zeroes": true, 00:30:08.494 "zcopy": false, 00:30:08.494 "zone_append": false, 00:30:08.494 "zone_management": false 00:30:08.494 }, 00:30:08.494 "uuid": "4aa18bdb-c59f-4a4a-9348-5e895019e734", 00:30:08.494 "zoned": false 00:30:08.494 } 00:30:08.494 ] 00:30:08.494 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=102543 00:30:08.494 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:08.494 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:08.494 Running I/O for 10 seconds... 00:30:09.510 Latency(us) 00:30:09.510 [2024-12-06T13:24:16.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:09.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:09.510 Nvme0n1 : 1.00 7225.00 28.22 0.00 0.00 0.00 0.00 0.00 00:30:09.510 [2024-12-06T13:24:16.038Z] =================================================================================================================== 00:30:09.510 [2024-12-06T13:24:16.038Z] Total : 7225.00 28.22 0.00 0.00 0.00 0.00 0.00 00:30:09.510 00:30:10.446 13:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 35323fc2-5229-4e2a-b9e7-e29b8fa5a594 00:30:10.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:10.446 Nvme0n1 : 2.00 7231.50 28.25 0.00 0.00 0.00 0.00 0.00 00:30:10.446 [2024-12-06T13:24:16.974Z] =================================================================================================================== 00:30:10.446 [2024-12-06T13:24:16.974Z] Total : 7231.50 28.25 0.00 0.00 0.00 0.00 0.00 00:30:10.446 00:30:11.012 true 00:30:11.012 13:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:11.012 13:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35323fc2-5229-4e2a-b9e7-e29b8fa5a594 00:30:11.269 13:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:11.269 13:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:11.269 13:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 102543 00:30:11.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:11.526 Nvme0n1 : 3.00 7077.33 27.65 0.00 0.00 0.00 0.00 0.00 00:30:11.526 [2024-12-06T13:24:18.054Z] =================================================================================================================== 00:30:11.526 [2024-12-06T13:24:18.054Z] Total : 7077.33 27.65 0.00 0.00 0.00 0.00 0.00 00:30:11.526 00:30:12.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:12.457 Nvme0n1 : 4.00 7132.75 27.86 0.00 0.00 0.00 0.00 0.00 00:30:12.457 
[2024-12-06T13:24:18.985Z] =================================================================================================================== 00:30:12.457 [2024-12-06T13:24:18.985Z] Total : 7132.75 27.86 0.00 0.00 0.00 0.00 0.00 00:30:12.457 00:30:13.831 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:13.831 Nvme0n1 : 5.00 7131.60 27.86 0.00 0.00 0.00 0.00 0.00 00:30:13.831 [2024-12-06T13:24:20.359Z] =================================================================================================================== 00:30:13.831 [2024-12-06T13:24:20.359Z] Total : 7131.60 27.86 0.00 0.00 0.00 0.00 0.00 00:30:13.831 00:30:14.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:14.767 Nvme0n1 : 6.00 7106.00 27.76 0.00 0.00 0.00 0.00 0.00 00:30:14.767 [2024-12-06T13:24:21.295Z] =================================================================================================================== 00:30:14.767 [2024-12-06T13:24:21.295Z] Total : 7106.00 27.76 0.00 0.00 0.00 0.00 0.00 00:30:14.767 00:30:15.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:15.713 Nvme0n1 : 7.00 7137.43 27.88 0.00 0.00 0.00 0.00 0.00 00:30:15.713 [2024-12-06T13:24:22.241Z] =================================================================================================================== 00:30:15.713 [2024-12-06T13:24:22.242Z] Total : 7137.43 27.88 0.00 0.00 0.00 0.00 0.00 00:30:15.714 00:30:16.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:16.647 Nvme0n1 : 8.00 7116.38 27.80 0.00 0.00 0.00 0.00 0.00 00:30:16.647 [2024-12-06T13:24:23.175Z] =================================================================================================================== 00:30:16.647 [2024-12-06T13:24:23.175Z] Total : 7116.38 27.80 0.00 0.00 0.00 0.00 0.00 00:30:16.647 00:30:17.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:17.584 Nvme0n1 : 9.00 7051.56 27.55 0.00 0.00 0.00 0.00 0.00 00:30:17.584 [2024-12-06T13:24:24.112Z] =================================================================================================================== 00:30:17.584 [2024-12-06T13:24:24.112Z] Total : 7051.56 27.55 0.00 0.00 0.00 0.00 0.00 00:30:17.584 00:30:18.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:18.520 Nvme0n1 : 10.00 7035.20 27.48 0.00 0.00 0.00 0.00 0.00 00:30:18.520 [2024-12-06T13:24:25.048Z] =================================================================================================================== 00:30:18.520 [2024-12-06T13:24:25.048Z] Total : 7035.20 27.48 0.00 0.00 0.00 0.00 0.00 00:30:18.520 00:30:18.520 00:30:18.520 Latency(us) 00:30:18.520 [2024-12-06T13:24:25.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:18.520 Nvme0n1 : 10.02 7044.13 27.52 0.00 0.00 18157.24 6494.02 45994.36 00:30:18.520 [2024-12-06T13:24:25.048Z] =================================================================================================================== 00:30:18.520 [2024-12-06T13:24:25.048Z] Total : 7044.13 27.52 0.00 0.00 18157.24 6494.02 45994.36 00:30:18.520 { 00:30:18.520 "results": [ 00:30:18.520 { 00:30:18.520 "job": "Nvme0n1", 00:30:18.520 "core_mask": "0x2", 00:30:18.520 "workload": "randwrite", 00:30:18.520 "status": "finished", 00:30:18.520 "queue_depth": 128, 00:30:18.520 "io_size": 4096, 
00:30:18.520 "runtime": 10.018419, 00:30:18.520 "iops": 7044.125425378995, 00:30:18.520 "mibps": 27.5161149428867, 00:30:18.520 "io_failed": 0, 00:30:18.520 "io_timeout": 0, 00:30:18.520 "avg_latency_us": 18157.24110872223, 00:30:18.520 "min_latency_us": 6494.021818181818, 00:30:18.520 "max_latency_us": 45994.35636363636 00:30:18.520 } 00:30:18.520 ], 00:30:18.520 "core_count": 1 00:30:18.520 } 00:30:18.520 13:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 102512 00:30:18.520 13:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 102512 ']' 00:30:18.520 13:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 102512 00:30:18.520 13:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:18.520 13:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:18.520 13:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102512 00:30:18.520 killing process with pid 102512 00:30:18.520 Received shutdown signal, test time was about 10.000000 seconds 00:30:18.520 00:30:18.520 Latency(us) 00:30:18.520 [2024-12-06T13:24:25.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.520 [2024-12-06T13:24:25.048Z] =================================================================================================================== 00:30:18.520 [2024-12-06T13:24:25.048Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:18.520 13:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:18.520 13:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:18.520 13:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102512' 00:30:18.520 13:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 102512 00:30:18.520 13:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 102512 00:30:18.826 13:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:30:19.103 13:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:19.361 13:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:19.361 13:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35323fc2-5229-4e2a-b9e7-e29b8fa5a594 00:30:19.928 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 
00:30:19.928 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:19.928 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:20.187 [2024-12-06 13:24:26.473914] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:20.187 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35323fc2-5229-4e2a-b9e7-e29b8fa5a594 00:30:20.187 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:20.187 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35323fc2-5229-4e2a-b9e7-e29b8fa5a594 00:30:20.187 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:20.187 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:20.187 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:20.187 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:20.187 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:20.187 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:20.187 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:20.187 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:20.187 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35323fc2-5229-4e2a-b9e7-e29b8fa5a594 00:30:20.444 2024/12/06 13:24:26 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:35323fc2-5229-4e2a-b9e7-e29b8fa5a594], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:30:20.445 request: 00:30:20.445 { 00:30:20.445 "method": "bdev_lvol_get_lvstores", 00:30:20.445 "params": { 00:30:20.445 "uuid": "35323fc2-5229-4e2a-b9e7-e29b8fa5a594" 00:30:20.445 } 00:30:20.445 } 00:30:20.445 Got JSON-RPC error response 00:30:20.445 GoRPCClient: error on JSON-RPC call 00:30:20.445 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:20.445 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 
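The RPC failure traced above is deliberate: with the backing aio_bdev deleted, the lvstore lookup has to fail with Code=-19 (No such device). A plain-bash equivalent of the NOT helper used here (same rpc.py path and UUID; variable names and messages are illustrative):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
LVS_UUID=35323fc2-5229-4e2a-b9e7-e29b8fa5a594
"$RPC" bdev_aio_delete aio_bdev      # hot-removes the base bdev, closing the lvstore
if "$RPC" bdev_lvol_get_lvstores -u "$LVS_UUID" 2>/dev/null; then
    echo "lvstore still answers after aio_bdev removal" >&2
    exit 1
fi                                   # expected path: the call errors out and the test continues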
00:30:20.445 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:20.445 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:20.445 13:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:20.702 aio_bdev 00:30:20.702 13:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4aa18bdb-c59f-4a4a-9348-5e895019e734 00:30:20.702 13:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=4aa18bdb-c59f-4a4a-9348-5e895019e734 00:30:20.702 13:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:20.702 13:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:20.702 13:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:20.702 13:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:20.702 13:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:20.960 13:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4aa18bdb-c59f-4a4a-9348-5e895019e734 -t 2000 00:30:21.218 [ 00:30:21.218 { 00:30:21.218 "aliases": [ 00:30:21.218 "lvs/lvol" 00:30:21.218 ], 00:30:21.218 "assigned_rate_limits": { 00:30:21.218 "r_mbytes_per_sec": 0, 00:30:21.218 "rw_ios_per_sec": 0, 00:30:21.218 "rw_mbytes_per_sec": 0, 00:30:21.218 "w_mbytes_per_sec": 0 00:30:21.218 }, 00:30:21.218 "block_size": 4096, 00:30:21.218 "claimed": false, 00:30:21.218 "driver_specific": { 00:30:21.218 "lvol": { 00:30:21.218 "base_bdev": "aio_bdev", 00:30:21.218 "clone": false, 00:30:21.218 "esnap_clone": false, 00:30:21.218 "lvol_store_uuid": "35323fc2-5229-4e2a-b9e7-e29b8fa5a594", 00:30:21.218 "num_allocated_clusters": 38, 00:30:21.218 "snapshot": false, 00:30:21.218 "thin_provision": false 00:30:21.218 } 00:30:21.218 }, 00:30:21.218 "name": "4aa18bdb-c59f-4a4a-9348-5e895019e734", 00:30:21.218 "num_blocks": 38912, 00:30:21.218 "product_name": "Logical Volume", 00:30:21.218 "supported_io_types": { 00:30:21.218 "abort": false, 00:30:21.218 "compare": false, 00:30:21.218 "compare_and_write": false, 00:30:21.218 "copy": false, 00:30:21.218 "flush": false, 00:30:21.218 "get_zone_info": false, 00:30:21.218 "nvme_admin": false, 00:30:21.218 "nvme_io": false, 00:30:21.218 "nvme_io_md": false, 00:30:21.218 "nvme_iov_md": false, 00:30:21.218 "read": true, 00:30:21.218 "reset": true, 00:30:21.218 "seek_data": true, 00:30:21.218 "seek_hole": true, 00:30:21.218 "unmap": true, 00:30:21.218 "write": true, 00:30:21.218 "write_zeroes": true, 00:30:21.218 "zcopy": false, 00:30:21.218 "zone_append": false, 00:30:21.218 "zone_management": false 00:30:21.218 }, 00:30:21.218 "uuid": 
"4aa18bdb-c59f-4a4a-9348-5e895019e734", 00:30:21.218 "zoned": false 00:30:21.218 } 00:30:21.218 ] 00:30:21.475 13:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:21.475 13:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35323fc2-5229-4e2a-b9e7-e29b8fa5a594 00:30:21.475 13:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:21.733 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:21.733 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:21.733 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35323fc2-5229-4e2a-b9e7-e29b8fa5a594 00:30:21.991 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:21.991 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4aa18bdb-c59f-4a4a-9348-5e895019e734 00:30:22.249 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 35323fc2-5229-4e2a-b9e7-e29b8fa5a594 00:30:22.506 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:22.763 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:30:23.344 ************************************ 00:30:23.344 END TEST lvs_grow_clean 00:30:23.344 ************************************ 00:30:23.344 00:30:23.344 real 0m19.007s 00:30:23.344 user 0m18.373s 00:30:23.344 sys 0m2.173s 00:30:23.344 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:23.344 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:23.344 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:23.344 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:23.344 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.344 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:23.344 ************************************ 00:30:23.344 START TEST lvs_grow_dirty 00:30:23.345 ************************************ 00:30:23.345 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:23.345 13:24:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:23.345 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:23.345 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:23.345 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:23.345 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:23.345 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:23.345 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:30:23.345 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:30:23.345 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:23.602 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:23.602 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:24.167 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=03f66125-e352-4ea6-8828-52d4004618fc 00:30:24.167 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03f66125-e352-4ea6-8828-52d4004618fc 00:30:24.167 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:24.426 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:24.426 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:24.426 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 03f66125-e352-4ea6-8828-52d4004618fc lvol 150 00:30:24.683 13:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ddf4a737-c45e-4fad-baa2-333f7b8689c3 00:30:24.684 13:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:30:24.684 13:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:24.942 [2024-12-06 13:24:31.317883] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:24.942 [2024-12-06 13:24:31.318044] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:24.942 true 00:30:24.942 13:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03f66125-e352-4ea6-8828-52d4004618fc 00:30:24.942 13:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:25.200 13:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:25.200 13:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:25.847 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ddf4a737-c45e-4fad-baa2-333f7b8689c3 00:30:25.847 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:26.105 [2024-12-06 13:24:32.554340] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:26.105 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:30:26.364 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=102941 00:30:26.364 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:26.364 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:26.364 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 102941 /var/tmp/bdevperf.sock 00:30:26.364 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 102941 ']' 00:30:26.364 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:26.364 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:26.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
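At this point the lvol has been published over NVMe/TCP so bdevperf can attach to it. A condensed sketch of the export sequence traced above (NQN, serial, address, and lvol UUID are this run's values; RPC and LVOL_UUID are illustrative variable names):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
LVOL_UUID=ddf4a737-c45e-4fad-baa2-333f7b8689c3
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0       # -a: allow any host
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL_UUID"      # lvol becomes namespace 1
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
"$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420   # discovery service as well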
00:30:26.364 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:26.364 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.364 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:26.622 [2024-12-06 13:24:32.930243] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:30:26.622 [2024-12-06 13:24:32.930333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102941 ] 00:30:26.622 [2024-12-06 13:24:33.078022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.622 [2024-12-06 13:24:33.127456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.879 13:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:26.879 13:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:26.879 13:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:27.137 Nvme0n1 00:30:27.137 13:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:27.395 [ 00:30:27.395 { 00:30:27.395 "aliases": [ 00:30:27.395 "ddf4a737-c45e-4fad-baa2-333f7b8689c3" 00:30:27.395 ], 00:30:27.395 "assigned_rate_limits": { 00:30:27.395 "r_mbytes_per_sec": 0, 00:30:27.395 "rw_ios_per_sec": 0, 00:30:27.395 "rw_mbytes_per_sec": 0, 00:30:27.395 "w_mbytes_per_sec": 0 00:30:27.395 }, 00:30:27.395 "block_size": 4096, 00:30:27.395 "claimed": false, 00:30:27.395 "driver_specific": { 00:30:27.395 "mp_policy": "active_passive", 00:30:27.395 "nvme": [ 00:30:27.395 { 00:30:27.395 "ctrlr_data": { 00:30:27.395 "ana_reporting": false, 00:30:27.395 "cntlid": 1, 00:30:27.395 "firmware_revision": "25.01", 00:30:27.395 "model_number": "SPDK bdev Controller", 00:30:27.395 "multi_ctrlr": true, 00:30:27.395 "oacs": { 00:30:27.395 "firmware": 0, 00:30:27.395 "format": 0, 00:30:27.395 "ns_manage": 0, 00:30:27.395 "security": 0 00:30:27.395 }, 00:30:27.395 "serial_number": "SPDK0", 00:30:27.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:27.395 "vendor_id": "0x8086" 00:30:27.395 }, 00:30:27.395 "ns_data": { 00:30:27.395 "can_share": true, 00:30:27.395 "id": 1 00:30:27.395 }, 00:30:27.395 "trid": { 00:30:27.395 "adrfam": "IPv4", 00:30:27.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:27.395 "traddr": "10.0.0.3", 00:30:27.395 "trsvcid": "4420", 00:30:27.395 "trtype": "TCP" 00:30:27.395 }, 00:30:27.395 "vs": { 00:30:27.395 "nvme_version": "1.3" 00:30:27.395 } 00:30:27.395 } 00:30:27.395 ] 00:30:27.395 }, 00:30:27.395 "memory_domains": [ 00:30:27.395 { 00:30:27.395 "dma_device_id": "system", 00:30:27.395 "dma_device_type": 1 
00:30:27.395 } 00:30:27.395 ], 00:30:27.395 "name": "Nvme0n1", 00:30:27.395 "num_blocks": 38912, 00:30:27.395 "numa_id": -1, 00:30:27.395 "product_name": "NVMe disk", 00:30:27.395 "supported_io_types": { 00:30:27.395 "abort": true, 00:30:27.395 "compare": true, 00:30:27.395 "compare_and_write": true, 00:30:27.395 "copy": true, 00:30:27.395 "flush": true, 00:30:27.395 "get_zone_info": false, 00:30:27.395 "nvme_admin": true, 00:30:27.395 "nvme_io": true, 00:30:27.395 "nvme_io_md": false, 00:30:27.395 "nvme_iov_md": false, 00:30:27.395 "read": true, 00:30:27.395 "reset": true, 00:30:27.395 "seek_data": false, 00:30:27.395 "seek_hole": false, 00:30:27.395 "unmap": true, 00:30:27.395 "write": true, 00:30:27.395 "write_zeroes": true, 00:30:27.395 "zcopy": false, 00:30:27.395 "zone_append": false, 00:30:27.395 "zone_management": false 00:30:27.395 }, 00:30:27.395 "uuid": "ddf4a737-c45e-4fad-baa2-333f7b8689c3", 00:30:27.395 "zoned": false 00:30:27.395 } 00:30:27.395 ] 00:30:27.395 13:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=102975 00:30:27.395 13:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:27.395 13:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:27.654 Running I/O for 10 seconds... 00:30:28.588 Latency(us) 00:30:28.588 [2024-12-06T13:24:35.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:28.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:28.588 Nvme0n1 : 1.00 6964.00 27.20 0.00 0.00 0.00 0.00 0.00 00:30:28.588 [2024-12-06T13:24:35.116Z] =================================================================================================================== 00:30:28.588 [2024-12-06T13:24:35.116Z] Total : 6964.00 27.20 0.00 0.00 0.00 0.00 0.00 00:30:28.588 00:30:29.519 13:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 03f66125-e352-4ea6-8828-52d4004618fc 00:30:29.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:29.519 Nvme0n1 : 2.00 7363.00 28.76 0.00 0.00 0.00 0.00 0.00 00:30:29.519 [2024-12-06T13:24:36.047Z] =================================================================================================================== 00:30:29.519 [2024-12-06T13:24:36.047Z] Total : 7363.00 28.76 0.00 0.00 0.00 0.00 0.00 00:30:29.519 00:30:29.775 true 00:30:29.775 13:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03f66125-e352-4ea6-8828-52d4004618fc 00:30:29.775 13:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:30.340 13:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:30.340 13:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:30.340 13:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@65 -- # wait 102975 00:30:30.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:30.597 Nvme0n1 : 3.00 7364.67 28.77 0.00 0.00 0.00 0.00 0.00 00:30:30.597 [2024-12-06T13:24:37.125Z] =================================================================================================================== 00:30:30.597 [2024-12-06T13:24:37.125Z] Total : 7364.67 28.77 0.00 0.00 0.00 0.00 0.00 00:30:30.597 00:30:31.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:31.530 Nvme0n1 : 4.00 7385.75 28.85 0.00 0.00 0.00 0.00 0.00 00:30:31.530 [2024-12-06T13:24:38.058Z] =================================================================================================================== 00:30:31.530 [2024-12-06T13:24:38.058Z] Total : 7385.75 28.85 0.00 0.00 0.00 0.00 0.00 00:30:31.530 00:30:32.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:32.466 Nvme0n1 : 5.00 7321.60 28.60 0.00 0.00 0.00 0.00 0.00 00:30:32.466 [2024-12-06T13:24:38.994Z] =================================================================================================================== 00:30:32.466 [2024-12-06T13:24:38.994Z] Total : 7321.60 28.60 0.00 0.00 0.00 0.00 0.00 00:30:32.466 00:30:33.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:33.841 Nvme0n1 : 6.00 7339.00 28.67 0.00 0.00 0.00 0.00 0.00 00:30:33.841 [2024-12-06T13:24:40.369Z] =================================================================================================================== 00:30:33.841 [2024-12-06T13:24:40.369Z] Total : 7339.00 28.67 0.00 0.00 0.00 0.00 0.00 00:30:33.841 00:30:34.783 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:34.783 Nvme0n1 : 7.00 7189.57 28.08 0.00 0.00 0.00 0.00 0.00 00:30:34.783 [2024-12-06T13:24:41.311Z] =================================================================================================================== 00:30:34.783 [2024-12-06T13:24:41.311Z] Total : 7189.57 28.08 0.00 0.00 0.00 0.00 0.00 00:30:34.783 00:30:35.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:35.750 Nvme0n1 : 8.00 7173.25 28.02 0.00 0.00 0.00 0.00 0.00 00:30:35.750 [2024-12-06T13:24:42.278Z] =================================================================================================================== 00:30:35.750 [2024-12-06T13:24:42.278Z] Total : 7173.25 28.02 0.00 0.00 0.00 0.00 0.00 00:30:35.750 00:30:36.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:36.684 Nvme0n1 : 9.00 7174.78 28.03 0.00 0.00 0.00 0.00 0.00 00:30:36.684 [2024-12-06T13:24:43.212Z] =================================================================================================================== 00:30:36.684 [2024-12-06T13:24:43.212Z] Total : 7174.78 28.03 0.00 0.00 0.00 0.00 0.00 00:30:36.684 00:30:37.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:37.639 Nvme0n1 : 10.00 7157.20 27.96 0.00 0.00 0.00 0.00 0.00 00:30:37.639 [2024-12-06T13:24:44.167Z] =================================================================================================================== 00:30:37.639 [2024-12-06T13:24:44.167Z] Total : 7157.20 27.96 0.00 0.00 0.00 0.00 0.00 00:30:37.639 00:30:37.639 00:30:37.639 Latency(us) 00:30:37.639 [2024-12-06T13:24:44.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:37.639 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:30:37.639 Nvme0n1 : 10.02 7158.44 27.96 0.00 0.00 17876.37 8460.10 183024.17 00:30:37.639 [2024-12-06T13:24:44.167Z] =================================================================================================================== 00:30:37.639 [2024-12-06T13:24:44.167Z] Total : 7158.44 27.96 0.00 0.00 17876.37 8460.10 183024.17 00:30:37.639 { 00:30:37.639 "results": [ 00:30:37.639 { 00:30:37.639 "job": "Nvme0n1", 00:30:37.639 "core_mask": "0x2", 00:30:37.639 "workload": "randwrite", 00:30:37.639 "status": "finished", 00:30:37.639 "queue_depth": 128, 00:30:37.639 "io_size": 4096, 00:30:37.639 "runtime": 10.016151, 00:30:37.639 "iops": 7158.438406130259, 00:30:37.639 "mibps": 27.962650023946324, 00:30:37.639 "io_failed": 0, 00:30:37.639 "io_timeout": 0, 00:30:37.639 "avg_latency_us": 17876.369887866113, 00:30:37.639 "min_latency_us": 8460.101818181818, 00:30:37.639 "max_latency_us": 183024.17454545456 00:30:37.639 } 00:30:37.639 ], 00:30:37.639 "core_count": 1 00:30:37.639 } 00:30:37.639 13:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 102941 00:30:37.639 13:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 102941 ']' 00:30:37.639 13:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 102941 00:30:37.639 13:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:37.639 13:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:37.639 13:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102941 00:30:37.639 13:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:37.639 13:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:37.639 killing process with pid 102941 00:30:37.639 13:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102941' 00:30:37.639 13:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 102941 00:30:37.639 Received shutdown signal, test time was about 10.000000 seconds 00:30:37.639 00:30:37.639 Latency(us) 00:30:37.639 [2024-12-06T13:24:44.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:37.639 [2024-12-06T13:24:44.167Z] =================================================================================================================== 00:30:37.639 [2024-12-06T13:24:44.167Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:37.639 13:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 102941 00:30:37.898 13:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:30:38.155 13:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:38.413 13:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03f66125-e352-4ea6-8828-52d4004618fc 00:30:38.413 13:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:38.670 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:38.670 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:38.670 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 102359 00:30:38.670 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 102359 00:30:38.928 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 102359 Killed "${NVMF_APP[@]}" "$@" 00:30:38.928 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:38.928 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:38.928 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:38.928 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:38.928 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:38.928 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=103132 00:30:38.928 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:38.928 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 103132 00:30:38.928 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 103132 ']' 00:30:38.928 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.928 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:38.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.928 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
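The kill -9 above is the point of the dirty variant: SIGKILL skips the clean lvstore shutdown so the blobstore superblock stays dirty, and a fresh target is then started in interrupt mode. A sketch of that restart using the binary and flags traced here (the nvmfpid variable is illustrative):

kill -9 "$nvmfpid"                   # 102359 in this run; no orderly lvstore close
wait "$nvmfpid" 2>/dev/null          # reap it; the shell reports it as Killed
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
nvmfpid=$!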
00:30:38.928 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:38.928 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:38.928 [2024-12-06 13:24:45.281762] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:38.928 [2024-12-06 13:24:45.283402] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:30:38.928 [2024-12-06 13:24:45.283502] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:38.928 [2024-12-06 13:24:45.441908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.186 [2024-12-06 13:24:45.476828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.186 [2024-12-06 13:24:45.476887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.186 [2024-12-06 13:24:45.476898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.186 [2024-12-06 13:24:45.476907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.186 [2024-12-06 13:24:45.476914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:39.186 [2024-12-06 13:24:45.477201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.186 [2024-12-06 13:24:45.525845] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:39.186 [2024-12-06 13:24:45.526140] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
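With the new target up, recovery is triggered simply by re-creating the AIO bdev on the same backing file; bdev examine finds the dirty lvstore and blobstore replays its metadata, as the "Performing recovery on blobstore" notices below show. A sketch of that step (file path, bdev name, and block size mirror this run; RPC and AIO_FILE are illustrative names):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
"$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096   # examine runs, recovery replays the lvstore
"$RPC" bdev_wait_for_examine                       # block until the lvol bdev reappears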
00:30:39.186 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.186 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:39.186 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:39.186 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:39.186 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:39.186 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:39.186 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:39.444 [2024-12-06 13:24:45.841568] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:39.444 [2024-12-06 13:24:45.841923] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:39.444 [2024-12-06 13:24:45.842157] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:39.444 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:39.444 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ddf4a737-c45e-4fad-baa2-333f7b8689c3 00:30:39.444 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ddf4a737-c45e-4fad-baa2-333f7b8689c3 00:30:39.444 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:39.444 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:39.444 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:39.444 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:39.444 13:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:39.701 13:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ddf4a737-c45e-4fad-baa2-333f7b8689c3 -t 2000 00:30:40.266 [ 00:30:40.266 { 00:30:40.266 "aliases": [ 00:30:40.266 "lvs/lvol" 00:30:40.266 ], 00:30:40.266 "assigned_rate_limits": { 00:30:40.266 "r_mbytes_per_sec": 0, 00:30:40.266 "rw_ios_per_sec": 0, 00:30:40.266 "rw_mbytes_per_sec": 0, 00:30:40.266 "w_mbytes_per_sec": 0 00:30:40.266 }, 00:30:40.266 "block_size": 4096, 00:30:40.266 "claimed": false, 00:30:40.266 "driver_specific": { 00:30:40.266 "lvol": { 00:30:40.266 "base_bdev": "aio_bdev", 00:30:40.266 "clone": false, 00:30:40.266 "esnap_clone": false, 00:30:40.266 
"lvol_store_uuid": "03f66125-e352-4ea6-8828-52d4004618fc", 00:30:40.266 "num_allocated_clusters": 38, 00:30:40.266 "snapshot": false, 00:30:40.266 "thin_provision": false 00:30:40.266 } 00:30:40.266 }, 00:30:40.266 "name": "ddf4a737-c45e-4fad-baa2-333f7b8689c3", 00:30:40.266 "num_blocks": 38912, 00:30:40.266 "product_name": "Logical Volume", 00:30:40.266 "supported_io_types": { 00:30:40.266 "abort": false, 00:30:40.266 "compare": false, 00:30:40.266 "compare_and_write": false, 00:30:40.266 "copy": false, 00:30:40.266 "flush": false, 00:30:40.266 "get_zone_info": false, 00:30:40.266 "nvme_admin": false, 00:30:40.266 "nvme_io": false, 00:30:40.266 "nvme_io_md": false, 00:30:40.266 "nvme_iov_md": false, 00:30:40.266 "read": true, 00:30:40.266 "reset": true, 00:30:40.266 "seek_data": true, 00:30:40.266 "seek_hole": true, 00:30:40.266 "unmap": true, 00:30:40.266 "write": true, 00:30:40.266 "write_zeroes": true, 00:30:40.266 "zcopy": false, 00:30:40.266 "zone_append": false, 00:30:40.266 "zone_management": false 00:30:40.266 }, 00:30:40.266 "uuid": "ddf4a737-c45e-4fad-baa2-333f7b8689c3", 00:30:40.266 "zoned": false 00:30:40.266 } 00:30:40.266 ] 00:30:40.266 13:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:40.266 13:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03f66125-e352-4ea6-8828-52d4004618fc 00:30:40.266 13:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:40.525 13:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:40.525 13:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:40.525 13:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03f66125-e352-4ea6-8828-52d4004618fc 00:30:40.783 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:40.783 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:41.041 [2024-12-06 13:24:47.325845] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:41.041 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03f66125-e352-4ea6-8828-52d4004618fc 00:30:41.041 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:41.042 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03f66125-e352-4ea6-8828-52d4004618fc 00:30:41.042 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:41.042 
13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:41.042 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:41.042 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:41.042 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:41.042 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:41.042 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:41.042 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:41.042 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03f66125-e352-4ea6-8828-52d4004618fc 00:30:41.299 2024/12/06 13:24:47 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:03f66125-e352-4ea6-8828-52d4004618fc], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:30:41.299 request: 00:30:41.299 { 00:30:41.299 "method": "bdev_lvol_get_lvstores", 00:30:41.299 "params": { 00:30:41.299 "uuid": "03f66125-e352-4ea6-8828-52d4004618fc" 00:30:41.299 } 00:30:41.299 } 00:30:41.299 Got JSON-RPC error response 00:30:41.299 GoRPCClient: error on JSON-RPC call 00:30:41.299 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:41.299 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:41.299 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:41.299 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:41.300 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:41.558 aio_bdev 00:30:41.558 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ddf4a737-c45e-4fad-baa2-333f7b8689c3 00:30:41.558 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ddf4a737-c45e-4fad-baa2-333f7b8689c3 00:30:41.558 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:41.558 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:41.558 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:41.558 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:41.558 13:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:41.816 13:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ddf4a737-c45e-4fad-baa2-333f7b8689c3 -t 2000 00:30:42.075 [ 00:30:42.075 { 00:30:42.075 "aliases": [ 00:30:42.075 "lvs/lvol" 00:30:42.075 ], 00:30:42.075 "assigned_rate_limits": { 00:30:42.075 "r_mbytes_per_sec": 0, 00:30:42.075 "rw_ios_per_sec": 0, 00:30:42.075 "rw_mbytes_per_sec": 0, 00:30:42.075 "w_mbytes_per_sec": 0 00:30:42.075 }, 00:30:42.075 "block_size": 4096, 00:30:42.075 "claimed": false, 00:30:42.075 "driver_specific": { 00:30:42.075 "lvol": { 00:30:42.075 "base_bdev": "aio_bdev", 00:30:42.075 "clone": false, 00:30:42.075 "esnap_clone": false, 00:30:42.075 "lvol_store_uuid": "03f66125-e352-4ea6-8828-52d4004618fc", 00:30:42.075 "num_allocated_clusters": 38, 00:30:42.075 "snapshot": false, 00:30:42.075 "thin_provision": false 00:30:42.075 } 00:30:42.075 }, 00:30:42.075 "name": "ddf4a737-c45e-4fad-baa2-333f7b8689c3", 00:30:42.075 "num_blocks": 38912, 00:30:42.075 "product_name": "Logical Volume", 00:30:42.075 "supported_io_types": { 00:30:42.075 "abort": false, 00:30:42.075 "compare": false, 00:30:42.075 "compare_and_write": false, 00:30:42.075 "copy": false, 00:30:42.075 "flush": false, 00:30:42.075 "get_zone_info": false, 00:30:42.075 "nvme_admin": false, 00:30:42.075 "nvme_io": false, 00:30:42.075 "nvme_io_md": false, 00:30:42.075 "nvme_iov_md": false, 00:30:42.075 "read": true, 00:30:42.075 "reset": true, 00:30:42.075 "seek_data": true, 00:30:42.075 "seek_hole": true, 00:30:42.075 "unmap": true, 00:30:42.075 "write": true, 00:30:42.075 "write_zeroes": true, 00:30:42.075 "zcopy": false, 00:30:42.075 "zone_append": false, 00:30:42.075 "zone_management": false 00:30:42.075 }, 00:30:42.075 "uuid": "ddf4a737-c45e-4fad-baa2-333f7b8689c3", 00:30:42.075 "zoned": false 00:30:42.075 } 00:30:42.075 ] 00:30:42.075 13:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:42.075 13:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03f66125-e352-4ea6-8828-52d4004618fc 00:30:42.075 13:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:42.333 13:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:42.333 13:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03f66125-e352-4ea6-8828-52d4004618fc 00:30:42.333 13:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:42.592 13:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:42.592 
13:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ddf4a737-c45e-4fad-baa2-333f7b8689c3 00:30:42.851 13:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 03f66125-e352-4ea6-8828-52d4004618fc 00:30:43.109 13:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:43.368 13:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:30:43.935 00:30:43.935 real 0m20.416s 00:30:43.935 user 0m28.546s 00:30:43.935 sys 0m7.825s 00:30:43.935 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:43.935 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:43.935 ************************************ 00:30:43.935 END TEST lvs_grow_dirty 00:30:43.935 ************************************ 00:30:43.935 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:43.935 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:43.936 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:43.936 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:43.936 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:43.936 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:43.936 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:43.936 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:43.936 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:43.936 nvmf_trace.0 00:30:43.936 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:43.936 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:43.936 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:43.936 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:30:44.503 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:44.503 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:44.503 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:44.503 13:24:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:44.503 rmmod nvme_tcp 00:30:44.503 rmmod nvme_fabrics 00:30:44.503 rmmod nvme_keyring 00:30:44.503 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:44.503 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:44.503 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:44.503 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 103132 ']' 00:30:44.503 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 103132 00:30:44.503 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 103132 ']' 00:30:44.504 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 103132 00:30:44.504 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:44.504 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:44.504 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103132 00:30:44.504 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:44.504 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:44.504 killing process with pid 103132 00:30:44.504 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103132' 00:30:44.504 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 103132 00:30:44.504 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 103132 00:30:44.504 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:44.504 13:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:44.504 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:44.504 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:44.504 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:44.504 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:44.504 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:44.504 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:44.504 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:44.504 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:44.504 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- 
# ip link set nvmf_init_br2 nomaster 00:30:44.762 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:44.762 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:44.762 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:44.762 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:44.762 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:44.762 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:44.762 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:44.762 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:44.762 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:44.762 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:44.762 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:44.762 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:44.762 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.762 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.762 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.762 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:30:44.762 00:30:44.762 real 0m41.873s 00:30:44.762 user 0m48.179s 00:30:44.762 sys 0m11.042s 00:30:44.762 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.762 ************************************ 00:30:44.762 END TEST nvmf_lvs_grow 00:30:44.762 ************************************ 00:30:44.762 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:44.763 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:44.763 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:44.763 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.763 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:44.763 ************************************ 00:30:44.763 START TEST nvmf_bdev_io_wait 00:30:44.763 ************************************ 00:30:44.763 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:45.022 * Looking for test storage... 00:30:45.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:45.022 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:45.022 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:45.022 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:30:45.022 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:45.022 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.022 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.022 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.022 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.022 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.022 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.022 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:45.022 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.022 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.022 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.022 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.022 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:45.022 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:45.022 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:45.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.023 --rc genhtml_branch_coverage=1 00:30:45.023 --rc genhtml_function_coverage=1 00:30:45.023 --rc genhtml_legend=1 00:30:45.023 --rc geninfo_all_blocks=1 00:30:45.023 --rc geninfo_unexecuted_blocks=1 00:30:45.023 00:30:45.023 ' 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:45.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.023 --rc genhtml_branch_coverage=1 00:30:45.023 --rc genhtml_function_coverage=1 00:30:45.023 --rc genhtml_legend=1 00:30:45.023 --rc geninfo_all_blocks=1 00:30:45.023 --rc geninfo_unexecuted_blocks=1 00:30:45.023 00:30:45.023 ' 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:45.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.023 --rc genhtml_branch_coverage=1 00:30:45.023 --rc genhtml_function_coverage=1 00:30:45.023 --rc genhtml_legend=1 00:30:45.023 --rc geninfo_all_blocks=1 00:30:45.023 --rc geninfo_unexecuted_blocks=1 00:30:45.023 00:30:45.023 ' 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:45.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.023 --rc genhtml_branch_coverage=1 00:30:45.023 --rc genhtml_function_coverage=1 00:30:45.023 --rc genhtml_legend=1 00:30:45.023 --rc geninfo_all_blocks=1 00:30:45.023 --rc 
geninfo_unexecuted_blocks=1 00:30:45.023 00:30:45.023 ' 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:45.023 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:45.024 Cannot find device "nvmf_init_br" 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:45.024 Cannot find device "nvmf_init_br2" 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:30:45.024 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:45.283 Cannot find device "nvmf_tgt_br" 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:45.283 Cannot find device "nvmf_tgt_br2" 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:45.283 Cannot find device "nvmf_init_br" 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:45.283 Cannot find device "nvmf_init_br2" 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 00:30:45.283 Cannot find device "nvmf_tgt_br" 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:45.283 Cannot find device "nvmf_tgt_br2" 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:45.283 Cannot find device "nvmf_br" 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:45.283 Cannot find device "nvmf_init_if" 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:45.283 Cannot find device "nvmf_init_if2" 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:45.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:45.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:45.283 13:24:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:45.283 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:45.542 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:45.542 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:45.542 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:45.542 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:45.542 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:45.542 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:45.542 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:45.542 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:45.543 
13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:45.543 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:45.543 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:30:45.543 00:30:45.543 --- 10.0.0.3 ping statistics --- 00:30:45.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.543 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:45.543 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:45.543 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:30:45.543 00:30:45.543 --- 10.0.0.4 ping statistics --- 00:30:45.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.543 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:45.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:45.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:30:45.543 00:30:45.543 --- 10.0.0.1 ping statistics --- 00:30:45.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.543 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:45.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:45.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:30:45.543 00:30:45.543 --- 10.0.0.2 ping statistics --- 00:30:45.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.543 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=103584 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 103584 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 103584 ']' 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
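The nvmf_veth_init sequence above builds the whole test network from nothing: veth pairs whose target ends are moved into the nvmf_tgt_ns_spdk namespace, 10.0.0.1-4/24 addressing, an nvmf_br bridge tying both sides together, iptables ACCEPT rules for port 4420 tagged with an SPDK_NVMF comment (the iptr teardown at the end of the previous test strips exactly those rules via iptables-save | grep -v SPDK_NVMF | iptables-restore), and cross-namespace pings to prove connectivity. A condensed sketch of the same topology, using the interface names and addresses from the log but omitting the second initiator/target pair:

set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.3                                   # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host

The "Cannot find device" lines earlier are expected: the fini helpers run first, with a true traced after each failing command, so stale interfaces from a previous run are removed when present and the errors are ignored otherwise.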
00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.543 13:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:45.543 [2024-12-06 13:24:51.971089] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:45.543 [2024-12-06 13:24:51.972256] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:30:45.543 [2024-12-06 13:24:51.972330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.802 [2024-12-06 13:24:52.122119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:45.802 [2024-12-06 13:24:52.161971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.802 [2024-12-06 13:24:52.162035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.802 [2024-12-06 13:24:52.162049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.802 [2024-12-06 13:24:52.162059] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.802 [2024-12-06 13:24:52.162068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.802 [2024-12-06 13:24:52.162933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.802 [2024-12-06 13:24:52.163021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:45.802 [2024-12-06 13:24:52.163110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.802 [2024-12-06 13:24:52.163112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:45.802 [2024-12-06 13:24:52.164279] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
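nvmfappstart then launches the target inside that namespace: -i 0 sets the shm id, -e 0xFFFF the tracepoint group mask reported in the NOTICE lines, -m 0xF spawns the four reactors seen starting on cores 0-3, and --wait-for-rpc holds the framework in its pre-init state until RPCs arrive; waitforlisten blocks until the RPC socket answers. A rough equivalent of that launch-and-wait step, with the polling loop as an illustrative simplification of waitforlisten:

SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the target answers (simplified waitforlisten).
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid"   # abort the loop if the target process died
    sleep 0.1
done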
00:30:45.802 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:45.802 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:45.802 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:45.802 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:45.802 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:45.802 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.802 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:45.802 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.802 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:45.802 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.802 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:45.802 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.802 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:46.062 [2024-12-06 13:24:52.352655] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:46.062 [2024-12-06 13:24:52.352807] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:46.062 [2024-12-06 13:24:52.353376] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:46.062 [2024-12-06 13:24:52.354012] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
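With --wait-for-rpc in effect, bdev_set_options must land before framework_start_init completes subsystem initialization. The -p 5 -c 1 arguments shrink the global bdev_io pool to five entries with a per-thread cache of one; that deliberately starved pool is what later drives submissions through the bdev_io_wait path this test exercises. The same two calls issued directly through rpc.py (socket path as in the log):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc bdev_set_options -p 5 -c 1   # --bdev-io-pool-size 5, --bdev-io-cache-size 1
$rpc framework_start_init         # leave the --wait-for-rpc pre-init state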
00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:46.062 [2024-12-06 13:24:52.364518] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:46.062 Malloc0 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:46.062 [2024-12-06 13:24:52.420850] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=103625 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=103627 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:46.062 { 00:30:46.062 "params": { 00:30:46.062 "name": "Nvme$subsystem", 00:30:46.062 "trtype": "$TEST_TRANSPORT", 00:30:46.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.062 "adrfam": "ipv4", 00:30:46.062 "trsvcid": "$NVMF_PORT", 00:30:46.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.062 "hdgst": ${hdgst:-false}, 00:30:46.062 "ddgst": ${ddgst:-false} 00:30:46.062 }, 00:30:46.062 "method": "bdev_nvme_attach_controller" 00:30:46.062 } 00:30:46.062 EOF 00:30:46.062 )") 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=103629 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:46.062 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:46.062 { 00:30:46.062 "params": { 00:30:46.062 "name": "Nvme$subsystem", 00:30:46.062 "trtype": "$TEST_TRANSPORT", 00:30:46.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.062 "adrfam": "ipv4", 00:30:46.062 "trsvcid": "$NVMF_PORT", 00:30:46.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.062 "hdgst": ${hdgst:-false}, 00:30:46.062 "ddgst": ${ddgst:-false} 00:30:46.062 }, 00:30:46.062 "method": "bdev_nvme_attach_controller" 00:30:46.062 } 00:30:46.062 EOF 00:30:46.063 )") 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=103632 00:30:46.063 13:24:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:46.063 { 00:30:46.063 "params": { 00:30:46.063 "name": "Nvme$subsystem", 00:30:46.063 "trtype": "$TEST_TRANSPORT", 00:30:46.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.063 "adrfam": "ipv4", 00:30:46.063 "trsvcid": "$NVMF_PORT", 00:30:46.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.063 "hdgst": ${hdgst:-false}, 00:30:46.063 "ddgst": ${ddgst:-false} 00:30:46.063 }, 00:30:46.063 "method": "bdev_nvme_attach_controller" 00:30:46.063 } 00:30:46.063 EOF 00:30:46.063 )") 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
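Steps 20 through 25 of bdev_io_wait.sh above assembled the target side: a TCP transport with an 8192-byte IO unit, a 64 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev as a namespace on 10.0.0.3:4420. Collected into one rpc.py sequence, with every value exactly as traced:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"   # as in the previous sketch
$rpc nvmf_create_transport -t tcp -o -u 8192      # -u is --io-unit-size
$rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB total, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420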
00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:46.063 "params": { 00:30:46.063 "name": "Nvme1", 00:30:46.063 "trtype": "tcp", 00:30:46.063 "traddr": "10.0.0.3", 00:30:46.063 "adrfam": "ipv4", 00:30:46.063 "trsvcid": "4420", 00:30:46.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:46.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:46.063 "hdgst": false, 00:30:46.063 "ddgst": false 00:30:46.063 }, 00:30:46.063 "method": "bdev_nvme_attach_controller" 00:30:46.063 }' 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:46.063 { 00:30:46.063 "params": { 00:30:46.063 "name": "Nvme$subsystem", 00:30:46.063 "trtype": "$TEST_TRANSPORT", 00:30:46.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.063 "adrfam": "ipv4", 00:30:46.063 "trsvcid": "$NVMF_PORT", 00:30:46.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.063 "hdgst": ${hdgst:-false}, 00:30:46.063 "ddgst": ${ddgst:-false} 00:30:46.063 }, 00:30:46.063 "method": "bdev_nvme_attach_controller" 00:30:46.063 } 00:30:46.063 EOF 00:30:46.063 )") 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:46.063 "params": { 00:30:46.063 "name": "Nvme1", 00:30:46.063 "trtype": "tcp", 00:30:46.063 "traddr": "10.0.0.3", 00:30:46.063 "adrfam": "ipv4", 00:30:46.063 "trsvcid": "4420", 00:30:46.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:46.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:46.063 "hdgst": false, 00:30:46.063 "ddgst": false 00:30:46.063 }, 00:30:46.063 "method": "bdev_nvme_attach_controller" 00:30:46.063 }' 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
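gen_nvmf_target_json renders one bdev_nvme_attach_controller stanza per requested subsystem from the heredoc template, pipes the assembly through jq (the "jq ." traces above), and each bdevperf receives the result via process substitution, which is why /dev/fd/63 shows up on the command lines instead of a temp file. The write-workload launch above, written out (the <(...) form is inferred from the /dev/fd/63 expansion):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256 &   # <(...) expands to /dev/fd/63
WRITE_PID=$!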
00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:46.063 "params": { 00:30:46.063 "name": "Nvme1", 00:30:46.063 "trtype": "tcp", 00:30:46.063 "traddr": "10.0.0.3", 00:30:46.063 "adrfam": "ipv4", 00:30:46.063 "trsvcid": "4420", 00:30:46.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:46.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:46.063 "hdgst": false, 00:30:46.063 "ddgst": false 00:30:46.063 }, 00:30:46.063 "method": "bdev_nvme_attach_controller" 00:30:46.063 }' 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:46.063 "params": { 00:30:46.063 "name": "Nvme1", 00:30:46.063 "trtype": "tcp", 00:30:46.063 "traddr": "10.0.0.3", 00:30:46.063 "adrfam": "ipv4", 00:30:46.063 "trsvcid": "4420", 00:30:46.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:46.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:46.063 "hdgst": false, 00:30:46.063 "ddgst": false 00:30:46.063 }, 00:30:46.063 "method": "bdev_nvme_attach_controller" 00:30:46.063 }' 00:30:46.063 [2024-12-06 13:24:52.487220] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:30:46.063 [2024-12-06 13:24:52.487349] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:46.063 13:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 103625 00:30:46.063 [2024-12-06 13:24:52.503310] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:30:46.063 [2024-12-06 13:24:52.503405] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:46.063 [2024-12-06 13:24:52.507758] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:30:46.063 [2024-12-06 13:24:52.507864] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:46.063 [2024-12-06 13:24:52.524367] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
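Write (0x10), read (0x20), flush (0x40) and unmap (0x80) instances run concurrently on disjoint core masks, each with its own shm id and EAL file prefix (spdk1 through spdk4 in the EAL parameter lines). The script then reaps them in order; wait 103625 above is the write instance, and since wait returns the designated PID's exit status, a failure in any workload surfaces at its corresponding wait. A sketch of that reaping, with this run's PIDs in comments:

wait "$WRITE_PID"   # 103625
wait "$READ_PID"    # 103627
wait "$FLUSH_PID"   # 103629
wait "$UNMAP_PID"   # 103632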
00:30:46.063 [2024-12-06 13:24:52.524922] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:46.322 [2024-12-06 13:24:52.678077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.322 [2024-12-06 13:24:52.704131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:46.322 [2024-12-06 13:24:52.767690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.322 [2024-12-06 13:24:52.776920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.322 [2024-12-06 13:24:52.803113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:46.322 [2024-12-06 13:24:52.813085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.322 [2024-12-06 13:24:52.813759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:46.322 Running I/O for 1 seconds... 00:30:46.322 [2024-12-06 13:24:52.844655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:46.581 Running I/O for 1 seconds... 00:30:46.581 Running I/O for 1 seconds... 00:30:46.581 Running I/O for 1 seconds... 00:30:47.518 10138.00 IOPS, 39.60 MiB/s 00:30:47.518 Latency(us) 00:30:47.518 [2024-12-06T13:24:54.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.518 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:47.518 Nvme1n1 : 1.01 10174.16 39.74 0.00 0.00 12522.30 4021.53 14894.55 00:30:47.518 [2024-12-06T13:24:54.046Z] =================================================================================================================== 00:30:47.518 [2024-12-06T13:24:54.046Z] Total : 10174.16 39.74 0.00 0.00 12522.30 4021.53 14894.55 00:30:47.518 176072.00 IOPS, 687.78 MiB/s 00:30:47.518 Latency(us) 00:30:47.518 [2024-12-06T13:24:54.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.518 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:47.518 Nvme1n1 : 1.00 175726.09 686.43 0.00 0.00 724.20 301.61 1951.19 00:30:47.518 [2024-12-06T13:24:54.046Z] =================================================================================================================== 00:30:47.518 [2024-12-06T13:24:54.046Z] Total : 175726.09 686.43 0.00 0.00 724.20 301.61 1951.19 00:30:47.518 8150.00 IOPS, 31.84 MiB/s 00:30:47.518 Latency(us) 00:30:47.518 [2024-12-06T13:24:54.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.518 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:47.518 Nvme1n1 : 1.01 8224.50 32.13 0.00 0.00 15499.00 2472.49 19779.96 00:30:47.518 [2024-12-06T13:24:54.046Z] =================================================================================================================== 00:30:47.518 [2024-12-06T13:24:54.046Z] Total : 8224.50 32.13 0.00 0.00 15499.00 2472.49 19779.96 00:30:47.518 7837.00 IOPS, 30.61 MiB/s 00:30:47.518 Latency(us) 00:30:47.518 [2024-12-06T13:24:54.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.518 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:47.518 Nvme1n1 : 1.01 7929.50 30.97 0.00 0.00 16093.95 2770.39 23950.43 00:30:47.518 [2024-12-06T13:24:54.046Z] 
=================================================================================================================== 00:30:47.518 [2024-12-06T13:24:54.046Z] Total : 7929.50 30.97 0.00 0.00 16093.95 2770.39 23950.43 00:30:47.777 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 103627 00:30:47.777 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 103629 00:30:47.777 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 103632 00:30:47.777 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:47.777 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.777 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:47.777 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.777 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:47.778 rmmod nvme_tcp 00:30:47.778 rmmod nvme_fabrics 00:30:47.778 rmmod nvme_keyring 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 103584 ']' 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 103584 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 103584 ']' 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 103584 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
103584 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:47.778 killing process with pid 103584 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103584' 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 103584 00:30:47.778 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 103584 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:48.037 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # 
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:48.296 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:48.296 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:48.296 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.296 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:48.296 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.296 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:30:48.296 00:30:48.296 real 0m3.339s 00:30:48.296 user 0m11.516s 00:30:48.296 sys 0m2.395s 00:30:48.296 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:48.296 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:48.296 ************************************ 00:30:48.296 END TEST nvmf_bdev_io_wait 00:30:48.296 ************************************ 00:30:48.296 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:48.296 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:48.296 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:48.297 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:48.297 ************************************ 00:30:48.297 START TEST nvmf_queue_depth 00:30:48.297 ************************************ 00:30:48.297 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:48.297 * Looking for test storage... 
00:30:48.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:48.297 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:48.297 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:30:48.297 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:48.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.557 --rc genhtml_branch_coverage=1 00:30:48.557 --rc genhtml_function_coverage=1 00:30:48.557 --rc genhtml_legend=1 00:30:48.557 --rc geninfo_all_blocks=1 00:30:48.557 --rc geninfo_unexecuted_blocks=1 00:30:48.557 00:30:48.557 ' 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:48.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.557 --rc genhtml_branch_coverage=1 00:30:48.557 --rc genhtml_function_coverage=1 00:30:48.557 --rc genhtml_legend=1 00:30:48.557 --rc geninfo_all_blocks=1 00:30:48.557 --rc geninfo_unexecuted_blocks=1 00:30:48.557 00:30:48.557 ' 00:30:48.557 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:48.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.557 --rc genhtml_branch_coverage=1 00:30:48.557 --rc genhtml_function_coverage=1 00:30:48.557 --rc genhtml_legend=1 00:30:48.557 --rc geninfo_all_blocks=1 00:30:48.557 --rc geninfo_unexecuted_blocks=1 00:30:48.557 00:30:48.557 ' 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:48.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.558 --rc genhtml_branch_coverage=1 00:30:48.558 --rc genhtml_function_coverage=1 00:30:48.558 --rc genhtml_legend=1 00:30:48.558 --rc geninfo_all_blocks=1 00:30:48.558 --rc 
geninfo_unexecuted_blocks=1 00:30:48.558 00:30:48.558 ' 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:48.558 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:48.559 Cannot find device "nvmf_init_br" 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:48.559 Cannot find device "nvmf_init_br2" 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:48.559 Cannot find device "nvmf_tgt_br" 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:48.559 Cannot find device "nvmf_tgt_br2" 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:48.559 Cannot find device "nvmf_init_br" 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:48.559 Cannot find device "nvmf_init_br2" 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:30:48.559 
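The nomaster/down/delete probes above and just below only confirm that nothing is left over from a previous run; nvmf_veth_init then rebuilds the test topology from scratch. A condensed sketch of that build-up, with interface names and addresses taken verbatim from this trace (the real helper in test/nvmf/common.sh additionally tags each iptables rule with an SPDK_NVMF comment via its ipts wrapper and checks every step):

# Namespace for the target; the initiator stays in the root namespace
ip netns add nvmf_tgt_ns_spdk

# Four veth pairs: *_if is the endpoint, *_br is the bridge-facing peer
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator gets 10.0.0.1/.2, target gets 10.0.0.3/.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# One bridge ties the four host-side peers together
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done

# Admit NVMe/TCP (port 4420) and allow bridge-local forwarding
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow (10.0.0.3/.4 from the root namespace, 10.0.0.1/.2 from inside it) verify this topology before the target is started.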
13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:48.559 Cannot find device "nvmf_tgt_br" 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:48.559 Cannot find device "nvmf_tgt_br2" 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:48.559 Cannot find device "nvmf_br" 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:48.559 Cannot find device "nvmf_init_if" 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:30:48.559 13:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:48.559 Cannot find device "nvmf_init_if2" 00:30:48.559 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:30:48.559 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:48.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:48.559 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:30:48.559 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:48.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:48.559 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:30:48.559 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:48.559 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:48.559 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:48.559 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:48.559 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:48.559 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:48.819 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:48.819 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:30:48.819 00:30:48.819 --- 10.0.0.3 ping statistics --- 00:30:48.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.819 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:30:48.819 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:48.819 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:48.819 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:30:48.819 00:30:48.819 --- 10.0.0.4 ping statistics --- 00:30:48.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.819 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:48.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:48.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:30:48.820 00:30:48.820 --- 10.0.0.1 ping statistics --- 00:30:48.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.820 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:48.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:48.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:30:48.820 00:30:48.820 --- 10.0.0.2 ping statistics --- 00:30:48.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.820 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=103890 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 103890 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 103890 ']' 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
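nvmfappstart launches the target inside the namespace and waitforlisten (from test/common/autotest_common.sh) blocks until the app's RPC socket answers before any rpc_cmd is issued. A minimal stand-in for that sequence, assuming the paths shown in this trace and a simple polling loop in place of the real helper:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!

# /var/tmp/spdk.sock is a UNIX socket, so it stays reachable from the root
# namespace even though the target runs inside nvmf_tgt_ns_spdk.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        spdk_get_version >/dev/null 2>&1; do
    sleep 0.2
done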
00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:48.820 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:49.079 [2024-12-06 13:24:55.348587] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:49.079 [2024-12-06 13:24:55.349642] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:30:49.079 [2024-12-06 13:24:55.349706] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.079 [2024-12-06 13:24:55.504554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.079 [2024-12-06 13:24:55.542650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:49.079 [2024-12-06 13:24:55.542719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:49.079 [2024-12-06 13:24:55.542732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:49.079 [2024-12-06 13:24:55.542742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:49.079 [2024-12-06 13:24:55.542751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:49.079 [2024-12-06 13:24:55.543100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.079 [2024-12-06 13:24:55.597827] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:49.079 [2024-12-06 13:24:55.598189] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
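The rpc_cmd calls that follow first configure the target, then point a separate bdevperf process at it over NVMe/TCP. rpc_cmd is a thin wrapper around scripts/rpc.py, so the equivalent direct invocations, with every socket and argument taken from this trace, would be:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side (default socket /var/tmp/spdk.sock): TCP transport, a 64 MB
# malloc bdev with 512-byte blocks, and a subsystem exporting it on 10.0.0.3:4420
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Initiator side: bdevperf started with -z waits on its own RPC socket
# until a bdev is attached and perform_tests is requested
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests

This is the queue-depth test proper: 1024 outstanding 4 KiB verify I/Os for 10 seconds, the -q 1024 being what gives the test its name.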
00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:49.337 [2024-12-06 13:24:55.703976] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:49.337 Malloc0 00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.337 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:49.338 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.338 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:49.338 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.338 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:49.338 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:49.338 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:49.338 [2024-12-06 13:24:55.771973] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:49.338 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.338 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=103925 00:30:49.338 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:49.338 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:49.338 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 103925 /var/tmp/bdevperf.sock 00:30:49.338 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 103925 ']' 00:30:49.338 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:49.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:49.338 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:49.338 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:49.338 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:49.338 13:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:49.338 [2024-12-06 13:24:55.838809] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:30:49.338 [2024-12-06 13:24:55.838910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103925 ] 00:30:49.597 [2024-12-06 13:24:55.988669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.597 [2024-12-06 13:24:56.028376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.856 13:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.856 13:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:49.856 13:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:49.856 13:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.856 13:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:49.856 NVMe0n1 00:30:49.856 13:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.856 13:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:49.856 Running I/O for 10 seconds... 00:30:52.167 7703.00 IOPS, 30.09 MiB/s [2024-12-06T13:24:59.632Z] 7716.00 IOPS, 30.14 MiB/s [2024-12-06T13:25:00.569Z] 7916.67 IOPS, 30.92 MiB/s [2024-12-06T13:25:01.504Z] 8120.25 IOPS, 31.72 MiB/s [2024-12-06T13:25:02.441Z] 8079.60 IOPS, 31.56 MiB/s [2024-12-06T13:25:03.404Z] 8081.67 IOPS, 31.57 MiB/s [2024-12-06T13:25:04.782Z] 8089.71 IOPS, 31.60 MiB/s [2024-12-06T13:25:05.715Z] 8171.25 IOPS, 31.92 MiB/s [2024-12-06T13:25:06.648Z] 8212.78 IOPS, 32.08 MiB/s [2024-12-06T13:25:06.648Z] 8263.20 IOPS, 32.28 MiB/s 00:31:00.120 Latency(us) 00:31:00.120 [2024-12-06T13:25:06.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:00.120 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:00.120 Verification LBA range: start 0x0 length 0x4000 00:31:00.120 NVMe0n1 : 10.08 8292.94 32.39 0.00 0.00 122866.22 26691.03 115343.36 00:31:00.120 [2024-12-06T13:25:06.648Z] =================================================================================================================== 00:31:00.120 [2024-12-06T13:25:06.648Z] Total : 8292.94 32.39 0.00 0.00 122866.22 26691.03 115343.36 00:31:00.120 { 00:31:00.120 "results": [ 00:31:00.120 { 00:31:00.120 "job": "NVMe0n1", 00:31:00.120 "core_mask": "0x1", 00:31:00.120 "workload": "verify", 00:31:00.120 "status": "finished", 00:31:00.120 "verify_range": { 00:31:00.120 "start": 0, 00:31:00.120 "length": 16384 00:31:00.120 }, 00:31:00.120 "queue_depth": 1024, 00:31:00.120 "io_size": 4096, 00:31:00.120 "runtime": 10.078096, 00:31:00.120 "iops": 8292.935490989568, 00:31:00.120 "mibps": 32.394279261678, 00:31:00.120 "io_failed": 0, 00:31:00.120 "io_timeout": 0, 00:31:00.120 "avg_latency_us": 122866.21590254821, 00:31:00.120 "min_latency_us": 26691.025454545455, 00:31:00.120 "max_latency_us": 115343.36 00:31:00.120 } 00:31:00.120 ], 00:31:00.120 
"core_count": 1 00:31:00.120 } 00:31:00.120 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 103925 00:31:00.120 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 103925 ']' 00:31:00.120 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 103925 00:31:00.120 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:00.120 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:00.120 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103925 00:31:00.120 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:00.120 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:00.120 killing process with pid 103925 00:31:00.120 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103925' 00:31:00.120 Received shutdown signal, test time was about 10.000000 seconds 00:31:00.120 00:31:00.121 Latency(us) 00:31:00.121 [2024-12-06T13:25:06.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:00.121 [2024-12-06T13:25:06.649Z] =================================================================================================================== 00:31:00.121 [2024-12-06T13:25:06.649Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:00.121 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 103925 00:31:00.121 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 103925 00:31:00.121 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:00.121 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:00.121 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:00.121 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:00.378 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:00.378 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:00.378 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:00.378 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:00.378 rmmod nvme_tcp 00:31:00.378 rmmod nvme_fabrics 00:31:00.378 rmmod nvme_keyring 00:31:00.378 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:00.378 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:00.378 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:00.378 13:25:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 103890 ']' 00:31:00.378 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 103890 00:31:00.378 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 103890 ']' 00:31:00.378 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 103890 00:31:00.378 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:00.378 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:00.378 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103890 00:31:00.378 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:00.378 killing process with pid 103890 00:31:00.378 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:00.378 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103890' 00:31:00.378 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 103890 00:31:00.378 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 103890 00:31:00.636 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:00.636 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:00.636 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:00.636 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:00.636 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:00.636 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:31:00.636 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:00.636 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:00.636 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:00.636 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:00.636 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:00.636 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:00.636 13:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:00.636 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:00.636 13:25:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:00.636 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:00.636 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:00.636 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:00.637 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:00.637 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:00.637 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:00.637 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:00.895 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:00.895 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.895 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:00.895 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.895 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:31:00.895 00:31:00.895 real 0m12.528s 00:31:00.895 user 0m21.054s 00:31:00.895 sys 0m2.140s 00:31:00.895 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:00.895 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:00.895 ************************************ 00:31:00.895 END TEST nvmf_queue_depth 00:31:00.895 ************************************ 00:31:00.895 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:00.895 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:00.895 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:00.895 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:00.895 ************************************ 00:31:00.895 START TEST nvmf_target_multipath 00:31:00.895 ************************************ 00:31:00.895 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:00.895 * Looking for test storage... 
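The nvmf_queue_depth teardown traced above follows a fixed order: kill the I/O generator and the nvmf_tgt process, unload the initiator-side kernel modules, strip only the SPDK-tagged iptables rules, then dismantle the veth/bridge topology. A condensed sketch of that sequence, using commands visible in the trace (the $perfpid and $nvmfpid variable names and the final netns deletion are illustrative assumptions, not copied verbatim from the helpers):
  kill "$perfpid" && wait "$perfpid"                        # I/O generator (pid 103925 above)
  kill "$nvmfpid" && wait "$nvmfpid"                        # nvmf_tgt daemon (pid 103890 above)
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics    # unload initiator modules
  iptables-save | grep -v SPDK_NVMF | iptables-restore      # keep everything except SPDK-tagged rules
  for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$port" nomaster && ip link set "$port" down   # detach and lower bridge ports
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if && ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                          # assumed final step inside _remove_spdk_ns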
00:31:00.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:00.895 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:00.895 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:31:00.895 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:01.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.154 --rc genhtml_branch_coverage=1 00:31:01.154 --rc genhtml_function_coverage=1 00:31:01.154 --rc genhtml_legend=1 00:31:01.154 --rc geninfo_all_blocks=1 00:31:01.154 --rc geninfo_unexecuted_blocks=1 00:31:01.154 00:31:01.154 ' 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:01.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.154 --rc genhtml_branch_coverage=1 00:31:01.154 --rc genhtml_function_coverage=1 00:31:01.154 --rc genhtml_legend=1 00:31:01.154 --rc geninfo_all_blocks=1 00:31:01.154 --rc geninfo_unexecuted_blocks=1 00:31:01.154 00:31:01.154 ' 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:01.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.154 --rc genhtml_branch_coverage=1 00:31:01.154 --rc genhtml_function_coverage=1 00:31:01.154 --rc genhtml_legend=1 00:31:01.154 --rc geninfo_all_blocks=1 00:31:01.154 --rc geninfo_unexecuted_blocks=1 00:31:01.154 00:31:01.154 ' 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:01.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.154 --rc genhtml_branch_coverage=1 00:31:01.154 --rc genhtml_function_coverage=1 00:31:01.154 --rc 
genhtml_legend=1 00:31:01.154 --rc geninfo_all_blocks=1 00:31:01.154 --rc geninfo_unexecuted_blocks=1 00:31:01.154 00:31:01.154 ' 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:31:01.154 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:01.155 13:25:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:01.155 13:25:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:01.155 Cannot find device "nvmf_init_br" 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:01.155 Cannot find device "nvmf_init_br2" 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:01.155 Cannot find device "nvmf_tgt_br" 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:31:01.155 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:01.156 Cannot find device "nvmf_tgt_br2" 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 00:31:01.156 Cannot find device "nvmf_init_br" 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:01.156 Cannot find device "nvmf_init_br2" 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:01.156 Cannot find device "nvmf_tgt_br" 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:01.156 Cannot find device "nvmf_tgt_br2" 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:01.156 Cannot find device "nvmf_br" 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:01.156 Cannot find device "nvmf_init_if" 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:01.156 Cannot find device "nvmf_init_if2" 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:01.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:01.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:01.156 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:31:01.415 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:31:01.415 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms
00:31:01.415
00:31:01.415 --- 10.0.0.3 ping statistics ---
00:31:01.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:01.415 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:31:01.415 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:31:01.415 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms
00:31:01.415
00:31:01.415 --- 10.0.0.4 ping statistics ---
00:31:01.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:01.415 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:31:01.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:01.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms
00:31:01.415
00:31:01.415 --- 10.0.0.1 ping statistics ---
00:31:01.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:01.415 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:31:01.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:01.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms
00:31:01.415
00:31:01.415 --- 10.0.0.2 ping statistics ---
00:31:01.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:01.415 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']'
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']'
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=104295
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 104295
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 104295 ']'
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
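Those four pings close the loop on the network fixture: two initiator veth ends with 10.0.0.1/10.0.0.2 stay in the root namespace, two target ends with 10.0.0.3/10.0.0.4 sit inside nvmf_tgt_ns_spdk, and a single bridge joins the four peer interfaces so either initiator address can reach either target address. A condensed sketch of the setup commands traced earlier (addresses and device names from the log; error handling omitted):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator path 1 -> 10.0.0.1
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator path 2 -> 10.0.0.2
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target path 1   -> 10.0.0.3
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target path 2   -> 10.0.0.4
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target ends move into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up     # one bridge joins all four *_br peers
  for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$port" master nvmf_br
  done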
00:31:01.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.415 13:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:01.415 [2024-12-06 13:25:07.922104] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:01.415 [2024-12-06 13:25:07.923405] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:31:01.415 [2024-12-06 13:25:07.923481] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:01.673 [2024-12-06 13:25:08.077216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:01.673 [2024-12-06 13:25:08.118037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:01.673 [2024-12-06 13:25:08.118096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:01.673 [2024-12-06 13:25:08.118111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:01.673 [2024-12-06 13:25:08.118121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:01.673 [2024-12-06 13:25:08.118130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:01.673 [2024-12-06 13:25:08.118993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.673 [2024-12-06 13:25:08.119131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:01.673 [2024-12-06 13:25:08.119200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.673 [2024-12-06 13:25:08.119199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:01.673 [2024-12-06 13:25:08.177058] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:01.673 [2024-12-06 13:25:08.177502] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:01.673 [2024-12-06 13:25:08.177537] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:01.673 [2024-12-06 13:25:08.177875] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:01.673 [2024-12-06 13:25:08.178545] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:02.607 13:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.608 13:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:31:02.608 13:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:02.608 13:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:02.608 13:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:02.608 13:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:02.608 13:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:02.866 [2024-12-06 13:25:09.264668] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.866 13:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:03.125 Malloc0 00:31:03.384 13:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:31:03.642 13:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:03.900 13:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:04.175 [2024-12-06 13:25:10.624686] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:04.175 13:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:31:04.440 [2024-12-06 13:25:10.896753] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:31:04.440 13:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:31:04.699 13:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:31:04.699 13:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:31:04.699 13:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:31:04.699 13:25:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:04.699 13:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:04.699 13:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized
00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized
00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized
00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa
00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=104436
00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v
00:31:07.244 13:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1
00:31:07.244 [global]
00:31:07.244 thread=1
00:31:07.244 invalidate=1
00:31:07.244 rw=randrw
00:31:07.244 time_based=1
00:31:07.244 runtime=6
00:31:07.244 ioengine=libaio
00:31:07.244 direct=1
00:31:07.244 bs=4096
00:31:07.244 iodepth=128
00:31:07.244 norandommap=0
00:31:07.244 numjobs=1
00:31:07.244
00:31:07.244 verify_dump=1
00:31:07.244 verify_backlog=512
00:31:07.244 verify_state_save=0
00:31:07.244 do_verify=1
00:31:07.244 verify=crc32c-intel
00:31:07.244 [job0]
00:31:07.244 filename=/dev/nvme0n1
00:31:07.244 Could not set queue depth (nvme0n1)
00:31:07.244 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:31:07.244 fio-3.35
00:31:07.244 Starting 1 thread
00:31:07.812 13:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:31:08.070 13:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
00:31:08.637 13:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:31:08.637 13:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:31:08.637 13:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:31:08.637 13:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:31:08.637 13:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:31:08.637 13:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:31:08.637 13:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:31:08.637 13:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:31:08.637 13:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:31:08.637 13:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:31:08.637 13:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:31:08.637 13:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:31:08.637 13:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:31:09.570 13:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:31:09.570 13:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:31:09.570 13:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:31:09.570 13:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:31:09.829 13:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:31:10.395 13:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:31:10.395 13:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:31:10.395 13:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:31:10.396 13:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:31:10.396 13:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:31:10.396 13:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:31:10.396 13:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:31:10.396 13:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:31:10.396 13:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:31:10.396 13:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:31:10.396 13:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:31:10.396 13:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:31:10.396 13:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:31:11.332 13:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:31:11.332 13:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]]
00:31:11.332 13:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:31:11.332 13:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 104436
00:31:13.246
00:31:13.246 job0: (groupid=0, jobs=1): err= 0: pid=104457: Fri Dec 6 13:25:19 2024
00:31:13.246 read: IOPS=9895, BW=38.7MiB/s (40.5MB/s)(232MiB/6007msec)
00:31:13.246 slat (usec): min=2, max=7680, avg=56.47, stdev=274.74
00:31:13.246 clat (usec): min=798, max=26718, avg=8669.48, stdev=1875.53
00:31:13.246 lat (usec): min=843, max=26728, avg=8725.95, stdev=1890.48
00:31:13.246 clat percentiles (usec):
00:31:13.246 | 1.00th=[ 4621], 5.00th=[ 6128], 10.00th=[ 6849], 20.00th=[ 7373],
00:31:13.247 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8717],
00:31:13.247 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[11076], 95.00th=[11994],
00:31:13.247 | 99.00th=[14353], 99.50th=[15664], 99.90th=[18482], 99.95th=[20317],
00:31:13.247 | 99.99th=[25560]
00:31:13.247 bw ( KiB/s): min=11768, max=24848, per=51.23%, avg=20280.91, stdev=3557.97, samples=11
00:31:13.247 iops : min= 2942, max= 6212, avg=5070.18, stdev=889.45, samples=11
00:31:13.247 write: IOPS=5611, BW=21.9MiB/s (23.0MB/s)(122MiB/5584msec); 0 zone resets
00:31:13.247 slat (usec): min=6, max=2925, avg=70.66, stdev=149.28
00:31:13.247 clat (usec): min=604, max=24690, avg=7866.58, stdev=1706.13
00:31:13.247 lat (usec): min=662, max=24718, avg=7937.24, stdev=1716.41
00:31:13.247 clat percentiles (usec):
00:31:13.247 | 1.00th=[ 3523], 5.00th=[ 5211], 10.00th=[ 6259], 20.00th=[ 6915],
00:31:13.247 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7701], 60.00th=[ 7963],
00:31:13.247 | 70.00th=[ 8291], 80.00th=[ 8848], 90.00th=[ 9765], 95.00th=[10683],
00:31:13.247 | 99.00th=[13042], 99.50th=[14091], 99.90th=[18482], 99.95th=[22676],
00:31:13.247 | 99.99th=[23725]
00:31:13.247 bw ( KiB/s): min=12376, max=24478, per=90.59%, avg=20335.82, stdev=3334.43, samples=11
00:31:13.247 iops : min= 3094, max= 6119, avg=5083.91, stdev=833.55, samples=11
00:31:13.247 lat (usec) : 750=0.01%, 1000=0.01%
00:31:13.247 lat (msec) : 2=0.13%, 4=0.75%, 10=83.92%, 20=15.12%, 50=0.07%
00:31:13.247 cpu : usr=6.23%, sys=26.59%, ctx=7284, majf=0, minf=66
00:31:13.247 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:31:13.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:13.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:31:13.247 issued rwts: total=59444,31335,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:13.247 latency : target=0, window=0, percentile=100.00%, depth=128
00:31:13.247
00:31:13.247 Run status group 0 (all jobs):
00:31:13.247 READ: bw=38.7MiB/s (40.5MB/s), 38.7MiB/s-38.7MiB/s (40.5MB/s-40.5MB/s), io=232MiB (243MB), run=6007-6007msec
00:31:13.247 WRITE: bw=21.9MiB/s (23.0MB/s), 21.9MiB/s-21.9MiB/s (23.0MB/s-23.0MB/s), io=122MiB (128MB), run=5584-5584msec
00:31:13.247
00:31:13.247 Disk stats (read/write):
00:31:13.247 nvme0n1: ios=58599/30642, merge=0/0, ticks=472385/228178, in_queue=700563, util=98.68%
00:31:13.247 13:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
00:31:13.505 13:25:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:31:14.072 13:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:31:14.072 13:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:31:14.072 13:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:31:14.072 13:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:31:14.072 13:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:31:14.072 13:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:31:14.072 13:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:31:14.072 13:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:31:14.072 13:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:31:14.072 13:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:31:14.072 13:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:31:14.072 13:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:31:14.072 13:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:31:15.004 13:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:31:15.004 13:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:31:15.004 13:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:31:15.004 13:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:31:15.004 13:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=104592 00:31:15.004 13:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:31:15.004 13:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:31:15.004 [global] 00:31:15.004 thread=1 00:31:15.004 invalidate=1 00:31:15.004 rw=randrw 00:31:15.004 time_based=1 00:31:15.004 runtime=6 00:31:15.004 ioengine=libaio 00:31:15.004 direct=1 00:31:15.004 bs=4096 00:31:15.004 iodepth=128 00:31:15.004 norandommap=0 00:31:15.004 numjobs=1 00:31:15.004 00:31:15.004 verify_dump=1 00:31:15.004 verify_backlog=512 00:31:15.004 verify_state_save=0 00:31:15.004 do_verify=1 00:31:15.004 verify=crc32c-intel 00:31:15.004 [job0] 00:31:15.004 filename=/dev/nvme0n1 00:31:15.004 Could not set queue depth (nvme0n1) 00:31:15.004 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:15.004 fio-3.35 00:31:15.004 Starting 1 thread 00:31:15.938 13:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:31:16.503 13:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:31:16.761 13:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:31:16.761 13:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:31:16.761 13:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:31:16.761 13:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:31:16.761 13:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:31:16.761 13:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:31:16.761 13:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:31:16.761 13:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:31:16.761 13:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:31:16.761 13:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:31:16.761 13:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:31:16.761 13:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:31:16.761 13:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:31:17.695 13:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:31:17.695 13:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:31:17.695 13:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:31:17.695 13:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:31:18.261 13:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:31:18.520 13:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:31:18.520 13:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:31:18.520 13:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:31:18.520 13:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:31:18.520 13:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:31:18.520 13:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:31:18.520 13:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:31:18.520 13:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:31:18.520 13:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:31:18.520 13:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:31:18.520 13:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:31:18.520 13:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:31:18.520 13:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:31:19.451 13:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:31:19.451 13:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:31:19.451 13:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:31:19.451 13:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 104592 00:31:21.350 00:31:21.350 job0: (groupid=0, jobs=1): err= 0: pid=104613: Fri Dec 6 13:25:27 2024 00:31:21.350 read: IOPS=10.9k, BW=42.4MiB/s (44.4MB/s)(254MiB/6003msec) 00:31:21.350 slat (usec): min=3, max=9111, avg=44.00, stdev=248.99 00:31:21.350 clat (usec): min=168, max=53633, avg=7931.89, stdev=3040.00 00:31:21.350 lat (usec): min=313, max=53644, avg=7975.89, stdev=3056.86 00:31:21.350 clat percentiles (usec): 00:31:21.350 | 1.00th=[ 1860], 5.00th=[ 3621], 10.00th=[ 4555], 20.00th=[ 5735], 00:31:21.350 | 30.00th=[ 6718], 40.00th=[ 7504], 50.00th=[ 7963], 60.00th=[ 8455], 00:31:21.350 | 70.00th=[ 8979], 80.00th=[ 9765], 90.00th=[11076], 95.00th=[12125], 00:31:21.350 | 99.00th=[14877], 99.50th=[16581], 99.90th=[49021], 99.95th=[51643], 00:31:21.350 | 99.99th=[53216] 00:31:21.350 bw ( KiB/s): min= 1088, max=37984, per=55.00%, avg=23871.27, stdev=11480.83, samples=11 00:31:21.350 iops : min= 272, max= 9496, avg=5967.82, stdev=2870.21, samples=11 00:31:21.350 write: IOPS=7018, BW=27.4MiB/s (28.7MB/s)(141MiB/5140msec); 0 zone resets 00:31:21.350 slat (usec): min=5, max=5806, avg=59.10, stdev=119.38 00:31:21.350 clat (usec): min=291, max=52355, avg=6796.02, stdev=2842.25 00:31:21.350 lat (usec): min=348, max=52399, avg=6855.13, stdev=2852.29 00:31:21.350 clat percentiles (usec): 00:31:21.350 | 1.00th=[ 1483], 5.00th=[ 2933], 10.00th=[ 3654], 20.00th=[ 4555], 00:31:21.350 | 30.00th=[ 5276], 40.00th=[ 6259], 50.00th=[ 7111], 60.00th=[ 7570], 00:31:21.350 | 70.00th=[ 7963], 80.00th=[ 8586], 90.00th=[ 9634], 95.00th=[10552], 00:31:21.350 | 99.00th=[12387], 99.50th=[13698], 99.90th=[49546], 99.95th=[51119], 00:31:21.350 | 99.99th=[52167] 00:31:21.350 bw ( KiB/s): min= 1120, 
max=37488, per=85.13%, avg=23899.64, stdev=11388.24, samples=11 00:31:21.350 iops : min= 280, max= 9372, avg=5974.91, stdev=2847.06, samples=11 00:31:21.350 lat (usec) : 250=0.01%, 500=0.03%, 750=0.08%, 1000=0.15% 00:31:21.350 lat (msec) : 2=1.30%, 4=7.45%, 10=76.79%, 20=14.02%, 50=0.09% 00:31:21.350 lat (msec) : 100=0.09% 00:31:21.350 cpu : usr=6.46%, sys=30.76%, ctx=9550, majf=0, minf=127 00:31:21.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:31:21.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:21.350 issued rwts: total=65141,36076,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:21.350 00:31:21.350 Run status group 0 (all jobs): 00:31:21.350 READ: bw=42.4MiB/s (44.4MB/s), 42.4MiB/s-42.4MiB/s (44.4MB/s-44.4MB/s), io=254MiB (267MB), run=6003-6003msec 00:31:21.350 WRITE: bw=27.4MiB/s (28.7MB/s), 27.4MiB/s-27.4MiB/s (28.7MB/s-28.7MB/s), io=141MiB (148MB), run=5140-5140msec 00:31:21.350 00:31:21.350 Disk stats (read/write): 00:31:21.350 nvme0n1: ios=64218/35554, merge=0/0, ticks=463523/217584, in_queue=681107, util=98.65% 00:31:21.350 13:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:21.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:21.350 13:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:21.350 13:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:31:21.350 13:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:21.350 13:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:21.350 13:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:21.350 13:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:21.350 13:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:31:21.350 13:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:21.608 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:31:21.608 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:31:21.608 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:31:21.608 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:31:21.608 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:21.608 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 
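[Annotation] The check_ana_state polling exercised throughout the multipath run above can be reconstructed from the xtrace lines (target/multipath.sh@18 through @26). A minimal sketch, assuming the trace reflects the helper faithfully; the shipped script may differ in detail:

check_ana_state() {
    local path=$1 ana_state=$2              # e.g. nvme0c0n1, non-optimized
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    # Retry until the sysfs file exists and reports the expected ANA state;
    # each miss sleeps 1s, and the check gives up after ~20 attempts.
    while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1
        sleep 1s
    done
}

The ANA transitions the loop waits for are driven from the target side with scripts/rpc.py nvmf_subsystem_listener_set_ana_state, exactly as in the commands traced above.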
00:31:21.608 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:21.608 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:21.608 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:21.608 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:21.608 rmmod nvme_tcp 00:31:21.608 rmmod nvme_fabrics 00:31:21.608 rmmod nvme_keyring 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 104295 ']' 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 104295 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 104295 ']' 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 104295 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104295 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:21.866 killing process with pid 104295 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104295' 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 104295 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 104295 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:21.866 13:25:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:21.866 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:31:22.124 ************************************ 00:31:22.124 END TEST nvmf_target_multipath 00:31:22.124 ************************************ 00:31:22.124 00:31:22.124 real 0m21.360s 00:31:22.124 user 1m11.999s 00:31:22.124 sys 0m10.669s 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:22.124 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:22.383 13:25:28 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:22.383 ************************************ 00:31:22.383 START TEST nvmf_zcopy 00:31:22.383 ************************************ 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:22.383 * Looking for test storage... 00:31:22.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:22.383 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:22.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.384 --rc genhtml_branch_coverage=1 00:31:22.384 --rc genhtml_function_coverage=1 00:31:22.384 --rc genhtml_legend=1 00:31:22.384 --rc geninfo_all_blocks=1 00:31:22.384 --rc geninfo_unexecuted_blocks=1 00:31:22.384 00:31:22.384 ' 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:22.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.384 --rc genhtml_branch_coverage=1 00:31:22.384 --rc genhtml_function_coverage=1 00:31:22.384 --rc genhtml_legend=1 00:31:22.384 --rc geninfo_all_blocks=1 00:31:22.384 --rc geninfo_unexecuted_blocks=1 00:31:22.384 00:31:22.384 ' 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:22.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.384 --rc genhtml_branch_coverage=1 00:31:22.384 --rc genhtml_function_coverage=1 00:31:22.384 --rc genhtml_legend=1 00:31:22.384 --rc geninfo_all_blocks=1 00:31:22.384 --rc geninfo_unexecuted_blocks=1 00:31:22.384 00:31:22.384 ' 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:22.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.384 --rc genhtml_branch_coverage=1 00:31:22.384 --rc genhtml_function_coverage=1 00:31:22.384 --rc genhtml_legend=1 00:31:22.384 --rc geninfo_all_blocks=1 00:31:22.384 --rc geninfo_unexecuted_blocks=1 00:31:22.384 00:31:22.384 ' 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.384 13:25:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:22.384 13:25:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:22.384 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:22.385 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:22.385 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:22.642 Cannot find device "nvmf_init_br" 00:31:22.642 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:31:22.642 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:22.642 Cannot find device "nvmf_init_br2" 00:31:22.642 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:31:22.642 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:22.642 Cannot find device "nvmf_tgt_br" 00:31:22.642 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:31:22.642 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:22.642 Cannot find device "nvmf_tgt_br2" 00:31:22.643 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:31:22.643 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:22.643 Cannot find device "nvmf_init_br" 00:31:22.643 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:31:22.643 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:22.643 Cannot find device "nvmf_init_br2" 00:31:22.643 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:31:22.643 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:22.643 Cannot find device "nvmf_tgt_br" 00:31:22.643 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:31:22.643 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:22.643 Cannot find device "nvmf_tgt_br2" 00:31:22.643 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:31:22.643 13:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:22.643 Cannot find device 
"nvmf_br" 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:22.643 Cannot find device "nvmf_init_if" 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:22.643 Cannot find device "nvmf_init_if2" 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:22.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:22.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:22.643 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:22.901 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:22.901 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:22.901 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:22.901 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:22.901 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:22.901 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:22.901 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:22.901 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:22.901 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:22.901 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:22.901 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:22.901 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:22.901 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:22.901 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:22.901 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:22.902 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:31:22.902 00:31:22.902 --- 10.0.0.3 ping statistics --- 00:31:22.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.902 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:22.902 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:31:22.902 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:31:22.902 00:31:22.902 --- 10.0.0.4 ping statistics --- 00:31:22.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.902 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:22.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:22.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:31:22.902 00:31:22.902 --- 10.0.0.1 ping statistics --- 00:31:22.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.902 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:22.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:22.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:31:22.902 00:31:22.902 --- 10.0.0.2 ping statistics --- 00:31:22.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.902 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=104969 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 104969 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 104969 ']' 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:22.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:22.902 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:22.902 [2024-12-06 13:25:29.396900] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:22.902 [2024-12-06 13:25:29.398383] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:31:22.902 [2024-12-06 13:25:29.398469] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:23.160 [2024-12-06 13:25:29.553600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.160 [2024-12-06 13:25:29.606189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:23.160 [2024-12-06 13:25:29.606275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:23.160 [2024-12-06 13:25:29.606296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:23.160 [2024-12-06 13:25:29.606311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:23.160 [2024-12-06 13:25:29.606325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:23.160 [2024-12-06 13:25:29.606737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.160 [2024-12-06 13:25:29.671444] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:23.160 [2024-12-06 13:25:29.671817] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
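[Annotation] Condensed, the zcopy target bring-up traced below amounts to the following sequence. The commands are copied from the xtrace; the rpc.py shorthand and the backgrounding of nvmf_tgt (which nvmfappstart handles) are assumptions made to keep the sketch self-contained:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target runs inside the test netns: single core (-m 0x2), interrupt mode.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &

$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy   # TCP transport, zero-copy enabled
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0          # 32 MiB malloc bdev, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1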
00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.419 [2024-12-06 13:25:29.747680] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.419 [2024-12-06 13:25:29.775780] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:23.419 13:25:29 
00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:23.419 malloc0
00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:23.419 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:23.419 {
00:31:23.419 "params": {
00:31:23.420 "name": "Nvme$subsystem",
00:31:23.420 "trtype": "$TEST_TRANSPORT",
00:31:23.420 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:23.420 "adrfam": "ipv4",
00:31:23.420 "trsvcid": "$NVMF_PORT",
00:31:23.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:23.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:23.420 "hdgst": ${hdgst:-false},
00:31:23.420 "ddgst": ${ddgst:-false}
00:31:23.420 },
00:31:23.420 "method": "bdev_nvme_attach_controller"
00:31:23.420 }
00:31:23.420 EOF
00:31:23.420 )")
00:31:23.420 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:31:23.420 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:31:23.420 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:31:23.420 13:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:31:23.420 "params": {
00:31:23.420 "name": "Nvme1",
00:31:23.420 "trtype": "tcp",
00:31:23.420 "traddr": "10.0.0.3",
00:31:23.420 "adrfam": "ipv4",
00:31:23.420 "trsvcid": "4420",
00:31:23.420 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:31:23.420 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:31:23.420 "hdgst": false,
00:31:23.420 "ddgst": false
00:31:23.420 },
00:31:23.420 "method": "bdev_nvme_attach_controller"
00:31:23.420 }'
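The cat/jq/IFS/printf steps above are gen_nvmf_target_json expanding the heredoc template once per subsystem argument (here just "1") and emitting the single resolved bdev_nvme_attach_controller object that bdevperf consumes. The same attach can also be issued directly against a running SPDK app with rpc.py; a sketch, with flag spellings taken from rpc.py's usual bdev_nvme_attach_controller subcommand rather than from this log, so treat them as assumptions:

# attach the remote namespace as bdev "Nvme1n1" (controller name "Nvme1")
scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1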
00:31:23.420 [2024-12-06 13:25:29.870244] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization...
00:31:23.420 [2024-12-06 13:25:29.870339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105007 ]
00:31:23.678 [2024-12-06 13:25:30.017725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:23.678 [2024-12-06 13:25:30.061420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:23.678 Running I/O for 10 seconds...
00:31:26.011 4917.00 IOPS, 38.41 MiB/s
[2024-12-06T13:25:33.474Z] 5209.50 IOPS, 40.70 MiB/s
[2024-12-06T13:25:34.409Z] 5404.00 IOPS, 42.22 MiB/s
[2024-12-06T13:25:35.346Z] 5503.50 IOPS, 43.00 MiB/s
[2024-12-06T13:25:36.288Z] 5559.00 IOPS, 43.43 MiB/s
[2024-12-06T13:25:37.222Z] 5596.67 IOPS, 43.72 MiB/s
[2024-12-06T13:25:38.597Z] 5624.29 IOPS, 43.94 MiB/s
[2024-12-06T13:25:39.551Z] 5639.75 IOPS, 44.06 MiB/s
[2024-12-06T13:25:40.540Z] 5658.67 IOPS, 44.21 MiB/s
[2024-12-06T13:25:40.540Z] 5667.30 IOPS, 44.28 MiB/s
00:31:34.012 Latency(us)
[2024-12-06T13:25:40.540Z] Device Information          : runtime(s)     IOPS      MiB/s    Fail/s    TO/s     Average       min        max
00:31:34.012 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:31:34.012 Verification LBA range: start 0x0 length 0x1000
00:31:34.012 Nvme1n1                     :      10.02   5671.38    44.31     0.00     0.00   22495.85   2919.33   37176.79
00:31:34.012 [2024-12-06T13:25:40.540Z] ===================================================================================================================
00:31:34.012 [2024-12-06T13:25:40.540Z] Total                       :              5671.38    44.31     0.00     0.00   22495.85   2919.33   37176.79
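That 10-second verify run was driven entirely by the generated config: bdevperf attaches Nvme1n1 over NVMe/TCP and verifies 8 KiB I/O at queue depth 128, settling around 5.67k IOPS on this single-core VM. The harness passes the config over /dev/fd/62; by hand the equivalent invocation would use process substitution (sketch, assuming the same gen_nvmf_target_json helper is in scope):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192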
00:31:34.012 [2024-12-06T13:25:40.540Z] 13:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=105114
00:31:34.012 13:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:31:34.012 13:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:34.012 13:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:31:34.012 13:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:31:34.012 13:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:31:34.012 13:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:31:34.012 13:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:34.012 13:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:34.012 {
00:31:34.012 "params": {
00:31:34.012 "name": "Nvme$subsystem",
00:31:34.012 "trtype": "$TEST_TRANSPORT",
00:31:34.012 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:34.012 "adrfam": "ipv4",
00:31:34.012 "trsvcid": "$NVMF_PORT",
00:31:34.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:34.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:34.012 "hdgst": ${hdgst:-false},
00:31:34.012 "ddgst": ${ddgst:-false}
00:31:34.012 },
00:31:34.012 "method": "bdev_nvme_attach_controller"
00:31:34.012 }
00:31:34.012 EOF
00:31:34.012 )")
00:31:34.012 13:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:31:34.012 [2024-12-06 13:25:40.371509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.012 [2024-12-06 13:25:40.371566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:34.012 13:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:31:34.012 2024/12/06 13:25:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:31:34.012 13:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:31:34.012 13:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:31:34.012 "params": {
00:31:34.012 "name": "Nvme1",
00:31:34.012 "trtype": "tcp",
00:31:34.012 "traddr": "10.0.0.3",
00:31:34.012 "adrfam": "ipv4",
00:31:34.012 "trsvcid": "4420",
00:31:34.013 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:31:34.013 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:31:34.013 "hdgst": false,
00:31:34.013 "ddgst": false
00:31:34.013 },
00:31:34.013 "method": "bdev_nvme_attach_controller"
00:31:34.013 }'
[... the three-line failure above (subsystem.c:2130 "Requested NSID 1 already in use", nvmf_rpc.c:1520 "Unable to add namespace", JSON-RPC Code=-32602 Msg=Invalid parameters) repeats roughly 70 more times with timestamps 13:25:40.379447 through 13:25:41.855319 as the test re-issues nvmf_subsystem_add_ns during the bdevperf run; only the distinct lines interleaved with that loop are kept below ...]
00:31:34.013 [2024-12-06 13:25:40.438273] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization...
00:31:34.013 [2024-12-06 13:25:40.438379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105114 ]
00:31:34.272 [2024-12-06 13:25:40.591654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:34.272 [2024-12-06 13:25:40.624884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:34.272 Running I/O for 5 seconds...
00:31:35.309 11190.00 IOPS, 87.42 MiB/s [2024-12-06T13:25:41.837Z]
00:31:35.568 [2024-12-06 13:25:41.865334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:35.568 [2024-12-06 13:25:41.865390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:35.568 2024/12/06 13:25:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1]
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.568 [2024-12-06 13:25:41.881390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.568 [2024-12-06 13:25:41.881452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.568 2024/12/06 13:25:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.568 [2024-12-06 13:25:41.899573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.568 [2024-12-06 13:25:41.899621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.568 2024/12/06 13:25:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.568 [2024-12-06 13:25:41.910152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.568 [2024-12-06 13:25:41.910190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.568 2024/12/06 13:25:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.568 [2024-12-06 13:25:41.924660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.568 [2024-12-06 13:25:41.924724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.568 2024/12/06 13:25:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.568 [2024-12-06 13:25:41.943523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.568 [2024-12-06 13:25:41.943574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.568 2024/12/06 13:25:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.569 [2024-12-06 13:25:41.953530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.569 [2024-12-06 13:25:41.953567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.569 2024/12/06 13:25:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:31:35.569 [2024-12-06 13:25:41.968134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.569 [2024-12-06 13:25:41.968202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.569 2024/12/06 13:25:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.569 [2024-12-06 13:25:41.988247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.569 [2024-12-06 13:25:41.988304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.569 2024/12/06 13:25:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.569 [2024-12-06 13:25:42.004852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.569 [2024-12-06 13:25:42.004897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.569 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.569 [2024-12-06 13:25:42.021161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.569 [2024-12-06 13:25:42.021229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.569 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.569 [2024-12-06 13:25:42.038229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.569 [2024-12-06 13:25:42.038282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.569 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.569 [2024-12-06 13:25:42.053265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.569 [2024-12-06 13:25:42.053302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.569 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.569 [2024-12-06 13:25:42.069563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:31:35.569 [2024-12-06 13:25:42.069624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.569 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.569 [2024-12-06 13:25:42.086018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.569 [2024-12-06 13:25:42.086086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.569 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.828 [2024-12-06 13:25:42.107891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.828 [2024-12-06 13:25:42.107935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.828 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.828 [2024-12-06 13:25:42.128502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.828 [2024-12-06 13:25:42.128550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.828 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.828 [2024-12-06 13:25:42.146057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.828 [2024-12-06 13:25:42.146096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.828 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.828 [2024-12-06 13:25:42.160853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.828 [2024-12-06 13:25:42.160891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.828 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.828 [2024-12-06 13:25:42.179317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.828 [2024-12-06 13:25:42.179353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:31:35.828 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.828 [2024-12-06 13:25:42.189537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.828 [2024-12-06 13:25:42.189574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.828 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.828 [2024-12-06 13:25:42.205165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.828 [2024-12-06 13:25:42.205216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.828 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.828 [2024-12-06 13:25:42.221585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.828 [2024-12-06 13:25:42.221622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.828 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.828 [2024-12-06 13:25:42.238138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.828 [2024-12-06 13:25:42.238175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.828 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.828 [2024-12-06 13:25:42.248416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.828 [2024-12-06 13:25:42.248452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.828 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.828 [2024-12-06 13:25:42.264585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.828 [2024-12-06 13:25:42.264622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.828 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.828 [2024-12-06 13:25:42.283923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.828 [2024-12-06 13:25:42.283973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.828 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.829 [2024-12-06 13:25:42.303811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.829 [2024-12-06 13:25:42.303849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.829 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.829 [2024-12-06 13:25:42.322207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.829 [2024-12-06 13:25:42.322244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.829 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:35.829 [2024-12-06 13:25:42.337379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.829 [2024-12-06 13:25:42.337419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.829 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.112 [2024-12-06 13:25:42.355513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.112 [2024-12-06 13:25:42.355550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.112 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.112 [2024-12-06 13:25:42.365696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.112 [2024-12-06 13:25:42.365781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.112 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.112 [2024-12-06 13:25:42.380389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.112 [2024-12-06 13:25:42.380433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.112 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.112 [2024-12-06 13:25:42.399457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.112 [2024-12-06 13:25:42.399501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.112 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.112 [2024-12-06 13:25:42.409774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.112 [2024-12-06 13:25:42.409816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.112 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.112 [2024-12-06 13:25:42.426236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.112 [2024-12-06 13:25:42.426280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.112 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.112 [2024-12-06 13:25:42.448972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.112 [2024-12-06 13:25:42.449023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.112 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.112 [2024-12-06 13:25:42.462499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.112 [2024-12-06 13:25:42.462552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.112 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:31:36.112 [2024-12-06 13:25:42.472886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.112 [2024-12-06 13:25:42.472926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.112 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.112 [2024-12-06 13:25:42.489389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.112 [2024-12-06 13:25:42.489434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.112 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.112 [2024-12-06 13:25:42.504882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.112 [2024-12-06 13:25:42.504928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.112 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.112 [2024-12-06 13:25:42.523196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.112 [2024-12-06 13:25:42.523243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.112 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.112 [2024-12-06 13:25:42.544792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.112 [2024-12-06 13:25:42.544853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.112 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.112 [2024-12-06 13:25:42.560285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.112 [2024-12-06 13:25:42.560338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.112 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.112 [2024-12-06 13:25:42.579323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:31:36.112 [2024-12-06 13:25:42.579376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.112 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.112 [2024-12-06 13:25:42.589590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.112 [2024-12-06 13:25:42.589639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.112 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.112 [2024-12-06 13:25:42.605236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.112 [2024-12-06 13:25:42.605278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.112 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.112 [2024-12-06 13:25:42.621884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.112 [2024-12-06 13:25:42.621928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.112 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.371 [2024-12-06 13:25:42.638378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.371 [2024-12-06 13:25:42.638423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.371 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.371 [2024-12-06 13:25:42.648470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.371 [2024-12-06 13:25:42.648510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.371 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.371 [2024-12-06 13:25:42.665240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.371 [2024-12-06 13:25:42.665284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:31:36.371 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.371 [2024-12-06 13:25:42.683798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.371 [2024-12-06 13:25:42.683845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.371 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.371 [2024-12-06 13:25:42.704914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.371 [2024-12-06 13:25:42.704974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.371 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.371 [2024-12-06 13:25:42.719776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.371 [2024-12-06 13:25:42.719819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.371 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.371 [2024-12-06 13:25:42.740254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.371 [2024-12-06 13:25:42.740303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.371 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.371 [2024-12-06 13:25:42.757237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.371 [2024-12-06 13:25:42.757283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.371 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.371 11168.50 IOPS, 87.25 MiB/s [2024-12-06T13:25:42.899Z] [2024-12-06 13:25:42.775187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.371 [2024-12-06 13:25:42.775233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.371 2024/12/06 13:25:42 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.371 [2024-12-06 13:25:42.795912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.371 [2024-12-06 13:25:42.795971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.371 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.371 [2024-12-06 13:25:42.811594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.371 [2024-12-06 13:25:42.811640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.371 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.371 [2024-12-06 13:25:42.821361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.371 [2024-12-06 13:25:42.821408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.371 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.371 [2024-12-06 13:25:42.836521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.371 [2024-12-06 13:25:42.836564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.371 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.371 [2024-12-06 13:25:42.855729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.371 [2024-12-06 13:25:42.855778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.371 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.371 [2024-12-06 13:25:42.866699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.371 [2024-12-06 13:25:42.866740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.371 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.371 [2024-12-06 13:25:42.888048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.371 [2024-12-06 13:25:42.888099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.371 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.631 [2024-12-06 13:25:42.904071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.631 [2024-12-06 13:25:42.904114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.631 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.631 [2024-12-06 13:25:42.923608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.631 [2024-12-06 13:25:42.923658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.631 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.631 [2024-12-06 13:25:42.933702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.631 [2024-12-06 13:25:42.933740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.631 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.631 [2024-12-06 13:25:42.949489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.631 [2024-12-06 13:25:42.949531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.631 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.631 [2024-12-06 13:25:42.965414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.631 [2024-12-06 13:25:42.965460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.631 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns 
method, err: Code=-32602 Msg=Invalid parameters 00:31:36.631 [2024-12-06 13:25:42.981805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.631 [2024-12-06 13:25:42.981848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.631 2024/12/06 13:25:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.631 [2024-12-06 13:25:42.999404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.631 [2024-12-06 13:25:42.999447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.631 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.631 [2024-12-06 13:25:43.009214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.631 [2024-12-06 13:25:43.009263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.631 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.631 [2024-12-06 13:25:43.025247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.631 [2024-12-06 13:25:43.025293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.631 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.631 [2024-12-06 13:25:43.041576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.631 [2024-12-06 13:25:43.041621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.631 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.631 [2024-12-06 13:25:43.057665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.631 [2024-12-06 13:25:43.057712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.631 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.631 [2024-12-06 13:25:43.073825] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.631 [2024-12-06 13:25:43.073869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.631 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.631 [2024-12-06 13:25:43.089442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.631 [2024-12-06 13:25:43.089487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.631 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.631 [2024-12-06 13:25:43.107484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.631 [2024-12-06 13:25:43.107527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.631 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.631 [2024-12-06 13:25:43.117437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.631 [2024-12-06 13:25:43.117477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.631 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.631 [2024-12-06 13:25:43.132124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.631 [2024-12-06 13:25:43.132167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.631 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.631 [2024-12-06 13:25:43.152605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.631 [2024-12-06 13:25:43.152655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.890 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.890 [2024-12-06 13:25:43.169479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.890 [2024-12-06 
13:25:43.169525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.890 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.890 [2024-12-06 13:25:43.185524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.890 [2024-12-06 13:25:43.185567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.890 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.890 [2024-12-06 13:25:43.201660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.890 [2024-12-06 13:25:43.201703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.890 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.890 [2024-12-06 13:25:43.217173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.890 [2024-12-06 13:25:43.217216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.890 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.890 [2024-12-06 13:25:43.234771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.890 [2024-12-06 13:25:43.234819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.890 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.890 [2024-12-06 13:25:43.256622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.890 [2024-12-06 13:25:43.256674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.890 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.890 [2024-12-06 13:25:43.272709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.890 [2024-12-06 13:25:43.272754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.890 2024/12/06 13:25:43 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.890 [2024-12-06 13:25:43.290975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.890 [2024-12-06 13:25:43.291019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.890 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.890 [2024-12-06 13:25:43.311549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.890 [2024-12-06 13:25:43.311595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.890 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.890 [2024-12-06 13:25:43.322175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.890 [2024-12-06 13:25:43.322215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.890 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.890 [2024-12-06 13:25:43.339726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.890 [2024-12-06 13:25:43.339773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.890 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.890 [2024-12-06 13:25:43.349551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.891 [2024-12-06 13:25:43.349595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.891 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.891 [2024-12-06 13:25:43.367993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.891 [2024-12-06 13:25:43.368043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.891 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
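The stretch above is a single JSON-RPC exchange replayed in a loop. The sketch below reconstructs that exchange from the logged method, params, and error; it is an illustration only, assuming SPDK's default RPC Unix socket at /var/tmp/spdk.sock (the request id and the raw-socket framing are illustrative, not taken from this job).

import json
import socket

# The call the log shows failing: attach bdev "malloc0" to subsystem
# nqn.2016-06.io.spdk:cnode1 with an explicit nsid of 1. Repeating the add
# with the same nsid yields "Requested NSID 1 already in use".
request = {
    "jsonrpc": "2.0",
    "id": 1,  # illustrative id; not taken from the log
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")  # assumed default SPDK RPC socket path
    sock.sendall(json.dumps(request).encode())
    # On the duplicate-NSID path the target answers with the JSON-RPC
    # "invalid params" error seen throughout this log:
    # {"jsonrpc": "2.0", "id": 1, "error": {"code": -32602, "message": "Invalid parameters"}}
    print(json.loads(sock.recv(4096).decode()))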
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.891 [2024-12-06 13:25:43.387872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.891 [2024-12-06 13:25:43.387926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.891 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:36.891 [2024-12-06 13:25:43.405847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.891 [2024-12-06 13:25:43.405893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.891 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.151 [2024-12-06 13:25:43.423393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.151 [2024-12-06 13:25:43.423439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.151 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.151 [2024-12-06 13:25:43.434406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.151 [2024-12-06 13:25:43.434445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.151 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.151 [2024-12-06 13:25:43.447626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.151 [2024-12-06 13:25:43.447680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.151 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.151 [2024-12-06 13:25:43.467976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.151 [2024-12-06 13:25:43.468022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.151 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.151 [2024-12-06 13:25:43.486411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.151 [2024-12-06 13:25:43.486460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.151 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.151 [2024-12-06 13:25:43.508034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.151 [2024-12-06 13:25:43.508085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.151 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.151 [2024-12-06 13:25:43.522972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.151 [2024-12-06 13:25:43.523014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.151 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.151 [2024-12-06 13:25:43.532859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.151 [2024-12-06 13:25:43.532899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.151 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.151 [2024-12-06 13:25:43.547420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.151 [2024-12-06 13:25:43.547461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.151 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.151 [2024-12-06 13:25:43.557260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.151 [2024-12-06 13:25:43.557297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.151 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.151 [2024-12-06 13:25:43.571667] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.151 [2024-12-06 13:25:43.571708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.151 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.151 [2024-12-06 13:25:43.581716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.151 [2024-12-06 13:25:43.581757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.151 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.151 [2024-12-06 13:25:43.596384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.151 [2024-12-06 13:25:43.596427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.151 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.151 [2024-12-06 13:25:43.615604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.151 [2024-12-06 13:25:43.615652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.151 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.151 [2024-12-06 13:25:43.625437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.151 [2024-12-06 13:25:43.625486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.151 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.151 [2024-12-06 13:25:43.639929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.151 [2024-12-06 13:25:43.639985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.151 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.151 [2024-12-06 13:25:43.660194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.151 [2024-12-06 
13:25:43.660244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.151 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.411 [2024-12-06 13:25:43.680687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.411 [2024-12-06 13:25:43.680738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.411 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.411 [2024-12-06 13:25:43.697542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.411 [2024-12-06 13:25:43.697587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.411 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.411 [2024-12-06 13:25:43.713846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.411 [2024-12-06 13:25:43.713895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.411 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.411 [2024-12-06 13:25:43.728802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.411 [2024-12-06 13:25:43.728848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.411 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.411 [2024-12-06 13:25:43.745775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.411 [2024-12-06 13:25:43.745822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.411 2024/12/06 13:25:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:37.411 [2024-12-06 13:25:43.761567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.411 [2024-12-06 13:25:43.761612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.411 11265.00 IOPS, 88.01 
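The three-line cycle above repeats for every iteration of the test: it keeps re-adding namespace ID 1 to nqn.2016-06.io.spdk:cnode1 while I/O is running, and SPDK rejects each attempt with JSON-RPC error -32602 because NSID 1 is already in use. A minimal Python sketch of that call, assuming SPDK's default RPC Unix socket at /var/tmp/spdk.sock and that the subsystem and the malloc0 bdev were created earlier in the run (the method name and params mirror the log entries; the socket path and helper are illustrative, not taken from the test script):

#!/usr/bin/env python3
# Sketch only: reproduces the failing nvmf_subsystem_add_ns call seen above.
# Assumption: SPDK is listening on its default RPC socket /var/tmp/spdk.sock.
import json
import socket

def spdk_rpc(method, params, sock_path="/var/tmp/spdk.sock"):
    # Send one JSON-RPC 2.0 request and read until a complete reply parses.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full response")
            buf += chunk
            try:
                return json.loads(buf)
            except ValueError:
                continue  # reply not complete yet

params = {"nqn": "nqn.2016-06.io.spdk:cnode1",
          "namespace": {"bdev_name": "malloc0", "nsid": 1}}
print(spdk_rpc("nvmf_subsystem_add_ns", params))  # first add succeeds
print(spdk_rpc("nvmf_subsystem_add_ns", params))  # repeat: Code=-32602, NSID 1 already in use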
00:31:37.411 11265.00 IOPS, 88.01 MiB/s [2024-12-06T13:25:43.939Z]
00:31:38.447 11290.25 IOPS, 88.21 MiB/s [2024-12-06T13:25:44.975Z]
00:31:38.964 [2024-12-06 13:25:45.245016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:38.964 [2024-12-06
13:25:45.245056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.964 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:38.964 [2024-12-06 13:25:45.261189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.964 [2024-12-06 13:25:45.261233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.964 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:38.964 [2024-12-06 13:25:45.279691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.964 [2024-12-06 13:25:45.279735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.964 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:38.964 [2024-12-06 13:25:45.289617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.964 [2024-12-06 13:25:45.289656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.964 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:38.964 [2024-12-06 13:25:45.304665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.964 [2024-12-06 13:25:45.304708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.964 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:38.965 [2024-12-06 13:25:45.323106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.965 [2024-12-06 13:25:45.323149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.965 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:38.965 [2024-12-06 13:25:45.344362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.965 [2024-12-06 13:25:45.344411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.965 2024/12/06 13:25:45 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:38.965 [2024-12-06 13:25:45.361003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.965 [2024-12-06 13:25:45.361047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.965 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:38.965 [2024-12-06 13:25:45.379479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.965 [2024-12-06 13:25:45.379527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.965 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:38.965 [2024-12-06 13:25:45.389551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.965 [2024-12-06 13:25:45.389589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.965 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:38.965 [2024-12-06 13:25:45.405244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.965 [2024-12-06 13:25:45.405287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.965 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:38.965 [2024-12-06 13:25:45.423254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.965 [2024-12-06 13:25:45.423314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.965 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:38.965 [2024-12-06 13:25:45.433288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.965 [2024-12-06 13:25:45.433327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.965 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:38.965 [2024-12-06 13:25:45.448435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.965 [2024-12-06 13:25:45.448479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.965 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:38.965 [2024-12-06 13:25:45.467744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.965 [2024-12-06 13:25:45.467798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.965 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:38.965 [2024-12-06 13:25:45.478085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.965 [2024-12-06 13:25:45.478121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.965 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.222 [2024-12-06 13:25:45.491713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.222 [2024-12-06 13:25:45.491757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.222 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.222 [2024-12-06 13:25:45.512438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.222 [2024-12-06 13:25:45.512490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.222 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.222 [2024-12-06 13:25:45.528686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.222 [2024-12-06 13:25:45.528732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.222 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.222 [2024-12-06 13:25:45.547060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.222 [2024-12-06 13:25:45.547114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.222 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.222 [2024-12-06 13:25:45.567995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.222 [2024-12-06 13:25:45.568046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.222 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.222 [2024-12-06 13:25:45.586816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.222 [2024-12-06 13:25:45.586863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.222 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.222 [2024-12-06 13:25:45.607746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.222 [2024-12-06 13:25:45.607793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.222 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.222 [2024-12-06 13:25:45.618634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.222 [2024-12-06 13:25:45.618674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.222 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.222 [2024-12-06 13:25:45.632511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.222 [2024-12-06 13:25:45.632554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.222 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.222 [2024-12-06 13:25:45.651889] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.222 [2024-12-06 13:25:45.651937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.222 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.222 [2024-12-06 13:25:45.671913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.222 [2024-12-06 13:25:45.671973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.222 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.222 [2024-12-06 13:25:45.691896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.222 [2024-12-06 13:25:45.691953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.222 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.222 [2024-12-06 13:25:45.712006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.222 [2024-12-06 13:25:45.712051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.222 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.222 [2024-12-06 13:25:45.729501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.222 [2024-12-06 13:25:45.729546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.223 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.223 [2024-12-06 13:25:45.745379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.223 [2024-12-06 13:25:45.745423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.480 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.480 [2024-12-06 13:25:45.761231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.480 [2024-12-06 
13:25:45.761292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.480 11304.80 IOPS, 88.32 MiB/s [2024-12-06T13:25:46.008Z] 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.480 [2024-12-06 13:25:45.771439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.480 [2024-12-06 13:25:45.771475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.480 00:31:39.480 Latency(us) 00:31:39.480 [2024-12-06T13:25:46.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.480 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:31:39.480 Nvme1n1 : 5.01 11308.43 88.35 0.00 0.00 11306.04 2591.65 19541.64 00:31:39.480 [2024-12-06T13:25:46.008Z] =================================================================================================================== 00:31:39.480 [2024-12-06T13:25:46.008Z] Total : 11308.43 88.35 0.00 0.00 11306.04 2591.65 19541.64 00:31:39.481 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.481 [2024-12-06 13:25:45.783433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.481 [2024-12-06 13:25:45.783471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.481 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.481 [2024-12-06 13:25:45.795468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.481 [2024-12-06 13:25:45.795519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.481 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.481 [2024-12-06 13:25:45.807451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.481 [2024-12-06 13:25:45.807495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.481 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.481 [2024-12-06 13:25:45.819454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.481 [2024-12-06 13:25:45.819496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:31:39.481 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.481 [2024-12-06 13:25:45.831457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.481 [2024-12-06 13:25:45.831526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.481 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.481 [2024-12-06 13:25:45.843446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.481 [2024-12-06 13:25:45.843490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.481 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.481 [2024-12-06 13:25:45.855438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.481 [2024-12-06 13:25:45.855480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.481 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.481 [2024-12-06 13:25:45.867440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.481 [2024-12-06 13:25:45.867476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.481 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.481 [2024-12-06 13:25:45.879454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.481 [2024-12-06 13:25:45.879497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.481 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.481 [2024-12-06 13:25:45.891437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.481 [2024-12-06 13:25:45.891479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.481 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, 
params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.481 [2024-12-06 13:25:45.903410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.481 [2024-12-06 13:25:45.903443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.481 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.481 [2024-12-06 13:25:45.915417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.481 [2024-12-06 13:25:45.915450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.481 2024/12/06 13:25:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:39.481 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (105114) - No such process 00:31:39.481 13:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 105114 00:31:39.481 13:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:39.481 13:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.481 13:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:39.481 13:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.481 13:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:39.481 13:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.481 13:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:39.481 delay0 00:31:39.481 13:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.481 13:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:39.481 13:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.481 13:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:39.481 13:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.481 13:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:31:39.738 [2024-12-06 
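The wall of Code=-32602 replies condensed above is the test repeatedly calling nvmf_subsystem_add_ns for an NSID the subsystem already owns, which the target rejects each time with "Requested NSID 1 already in use". The performance summary is also self-consistent: 11308.43 IOPS at the job's 8192-byte I/O size is 11308.43 x 8192 B / 1 MiB, roughly the reported 88.35 MiB/s. A minimal sketch of provoking the same rejection by hand against a running SPDK target (the checkout path and the default RPC socket are assumptions, not taken from this run):

    SPDK=/home/vagrant/spdk_repo/spdk                                    # assumed checkout location
    $SPDK/scripts/rpc.py bdev_malloc_create -b malloc0 64 512            # 64 MiB malloc bdev, 512-byte blocks
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add claims NSID 1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # second add fails: NSID 1 already in use (Code=-32602)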
[2024-12-06 13:25:46.124655] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:47.880 Initializing NVMe Controllers 00:31:47.880 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:31:47.880 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:47.880 Initialization complete. Launching workers. 00:31:47.880 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 219, failed: 27406 00:31:47.880 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 27474, failed to submit 151 00:31:47.880 success 27418, unsuccessful 56, failed 0 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:47.880 rmmod nvme_tcp 00:31:47.880 rmmod nvme_fabrics 00:31:47.880 rmmod nvme_keyring 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 104969 ']' 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 104969 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 104969 ']' 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 104969 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104969 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:47.880 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104969' killing process with pid 104969
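A quick consistency check on the abort run reported above: every I/O the example issued got exactly one abort attempt, and every submitted abort is accounted for. A throwaway sketch in the run's own shell idiom, with all figures copied from the log lines above:

    # cross-check the abort example's bookkeeping (figures copied from the log)
    completed=219 failed_io=27406 submitted=27474 not_submitted=151
    success=27418 unsuccessful=56 failed_abort=0
    echo $((completed + failed_io))                  # 27625 I/Os in total
    echo $((submitted + not_submitted))              # 27625 abort attempts, one per I/O
    echo $((success + unsuccessful + failed_abort))  # 27474, matching "abort submitted 27474"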
13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 104969 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 104969 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:31:47.881 00:31:47.881 real 0m25.022s 00:31:47.881 user 0m38.877s 00:31:47.881 sys 0m7.968s 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:47.881 ************************************ 00:31:47.881 END TEST nvmf_zcopy 00:31:47.881 ************************************ 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:47.881 ************************************ 00:31:47.881 START TEST nvmf_nmic 00:31:47.881 ************************************ 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:47.881 * Looking for test storage... 00:31:47.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:47.881 13:25:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:47.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.881 --rc genhtml_branch_coverage=1 00:31:47.881 --rc genhtml_function_coverage=1 00:31:47.881 --rc genhtml_legend=1 00:31:47.881 --rc geninfo_all_blocks=1 00:31:47.881 --rc geninfo_unexecuted_blocks=1 00:31:47.881 00:31:47.881 ' 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:47.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.881 --rc genhtml_branch_coverage=1 00:31:47.881 --rc genhtml_function_coverage=1 00:31:47.881 --rc genhtml_legend=1 00:31:47.881 --rc geninfo_all_blocks=1 00:31:47.881 --rc geninfo_unexecuted_blocks=1 00:31:47.881 00:31:47.881 ' 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:47.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.881 --rc genhtml_branch_coverage=1 00:31:47.881 --rc genhtml_function_coverage=1 00:31:47.881 --rc genhtml_legend=1 00:31:47.881 --rc geninfo_all_blocks=1 00:31:47.881 --rc geninfo_unexecuted_blocks=1 00:31:47.881 00:31:47.881 ' 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:47.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:31:47.881 --rc genhtml_branch_coverage=1 00:31:47.881 --rc genhtml_function_coverage=1 00:31:47.881 --rc genhtml_legend=1 00:31:47.881 --rc geninfo_all_blocks=1 00:31:47.881 --rc geninfo_unexecuted_blocks=1 00:31:47.881 00:31:47.881 ' 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.881 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.882 13:25:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:47.882 13:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:47.882 Cannot find device "nvmf_init_br" 00:31:47.882 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:31:47.882 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:47.882 Cannot find device "nvmf_init_br2" 00:31:47.882 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:31:47.882 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:47.882 Cannot find device "nvmf_tgt_br" 00:31:47.882 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:31:47.882 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:47.882 Cannot find device "nvmf_tgt_br2" 00:31:47.882 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:31:47.882 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:47.882 Cannot find device "nvmf_init_br" 00:31:47.882 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:31:47.882 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:47.882 Cannot find device "nvmf_init_br2" 00:31:47.882 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:31:47.882 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:47.882 Cannot find device "nvmf_tgt_br" 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:47.883 Cannot find device "nvmf_tgt_br2" 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
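Note on the block above: the "Cannot find device" and "Cannot open network namespace" errors are expected. nvmf_veth_init begins by tearing down whatever topology a previous run may have left behind, and each best-effort delete is immediately followed by a `true` on the next traced line so the failure is swallowed. The rebuild that follows creates four veth pairs, moves the target-side ends into a private namespace, and addresses both sides. Condensed sketch (every command appears verbatim in the trace; only the error-tolerant teardown is paraphrased with `|| true`):

ip link delete nvmf_br type bridge || true        # stale bridge from an earlier run
ip netns add nvmf_tgt_ns_spdk                     # private namespace for the target
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk   # target ends live in the netns
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if          # initiator-side addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target-side addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

The four *_br peers are then brought up and enslaved to a single bridge, nvmf_br, which is what lets the root namespace reach 10.0.0.3/10.0.0.4 inside nvmf_tgt_ns_spdk; the four pings later in the trace verify exactly that, in both directions.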
00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:47.883 Cannot find device "nvmf_br" 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:47.883 Cannot find device "nvmf_init_if" 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:47.883 Cannot find device "nvmf_init_if2" 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:47.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:47.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:47.883 13:25:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:47.883 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:48.142 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:31:48.142 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:31:48.142 00:31:48.142 --- 10.0.0.3 ping statistics --- 00:31:48.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.142 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:48.142 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:48.142 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:31:48.142 00:31:48.142 --- 10.0.0.4 ping statistics --- 00:31:48.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.142 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:48.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:48.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:31:48.142 00:31:48.142 --- 10.0.0.1 ping statistics --- 00:31:48.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.142 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:48.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:48.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:31:48.142 00:31:48.142 --- 10.0.0.2 ping statistics --- 00:31:48.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.142 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=105491 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 105491 00:31:48.142 13:25:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 105491 ']' 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:48.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:48.142 13:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:48.142 [2024-12-06 13:25:54.525623] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:48.142 [2024-12-06 13:25:54.527046] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:31:48.142 [2024-12-06 13:25:54.527147] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:48.401 [2024-12-06 13:25:54.682032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:48.401 [2024-12-06 13:25:54.723513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:48.401 [2024-12-06 13:25:54.723577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:48.401 [2024-12-06 13:25:54.723591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:48.401 [2024-12-06 13:25:54.723601] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:48.401 [2024-12-06 13:25:54.723610] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:48.401 [2024-12-06 13:25:54.724503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.401 [2024-12-06 13:25:54.724604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:48.401 [2024-12-06 13:25:54.724766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:48.401 [2024-12-06 13:25:54.724776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.401 [2024-12-06 13:25:54.782480] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:48.401 [2024-12-06 13:25:54.782939] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:48.401 [2024-12-06 13:25:54.783236] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
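The launch line buried in the trace above is the heart of this test variant: the target runs inside the namespace with --interrupt-mode, so rather than busy-polling, the reactors block and are woken on demand (the spdk_interrupt_mode_enable and "Set spdk_thread ... to intr mode" notices confirm it took effect). Annotated copy of the traced command; the flag meanings are read off the surrounding notices:

# -i 0              shared-memory/instance id (NVMF_APP_SHM_ID)
# -e 0xFFFF         tracepoint group mask, echoed by the app_setup_trace notices
# --interrupt-mode  appended by build_nvmf_app_args earlier in the trace
# -m 0xF            core mask: the four reactors on cores 0-3 reported above
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF

waitforlisten then blocks until the app (pid 105491 here) answers on /var/tmp/spdk.sock, so every rpc_cmd that follows has a live JSON-RPC server to talk to.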
00:31:48.401 [2024-12-06 13:25:54.783552] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:48.401 [2024-12-06 13:25:54.783598] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.335 [2024-12-06 13:25:55.605800] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.335 Malloc0 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
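With the target listening on the RPC socket, the whole subsystem is provisioned over JSON-RPC; rpc_cmd is the suite's wrapper around the scripts/rpc.py client. The same five calls, lifted from the trace with arguments unchanged (a sketch; rpc.py defaults to /var/tmp/spdk.sock, matching the waitforlisten above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192     # -u 8192: in-capsule data size
$rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB ramdisk, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The "NVMe/TCP Target Listening on 10.0.0.3 port 4420" notice just below is the target-side confirmation of that last call.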
00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.335 [2024-12-06 13:25:55.670056] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.335 test case1: single bdev can't be used in multiple subsystems 00:31:49.335 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.336 [2024-12-06 13:25:55.697800] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:49.336 [2024-12-06 13:25:55.697865] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:49.336 [2024-12-06 13:25:55.697879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.336 2024/12/06 13:25:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:49.336 request: 00:31:49.336 { 00:31:49.336 "method": "nvmf_subsystem_add_ns", 00:31:49.336 "params": { 00:31:49.336 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:49.336 "namespace": { 00:31:49.336 "bdev_name": "Malloc0", 00:31:49.336 "no_auto_visible": false, 00:31:49.336 "hide_metadata": false 00:31:49.336 } 00:31:49.336 } 00:31:49.336 } 00:31:49.336 Got JSON-RPC error response 00:31:49.336 GoRPCClient: error on JSON-RPC call 
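Test case1 above is a deliberate failure. Malloc0 was claimed with type exclusive_write when it was added to cnode1, so attaching the very same bdev to a second subsystem must be refused, and the target duly returns code -32602 (Invalid parameters) on the nvmf_subsystem_add_ns call for cnode2. The assertion pattern, reconstructed from the traced nmic.sh lines (control flow paraphrased; rpc_cmd as above):

nmic_status=0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1
if [ "$nmic_status" -eq 0 ]; then
    exit 1    # the add succeeding would mean the bdev claim check is broken
fi
echo ' Adding namespace failed - expected result.'

Test case2 then exercises the opposite direction: the same subsystem gains a second listener on port 4421 and the host connects once per port, which is why the disconnect further down reports "disconnected 2 controller(s)".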
00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:49.336 Adding namespace failed - expected result. 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:49.336 test case2: host connect to nvmf target in multiple paths 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.336 [2024-12-06 13:25:55.709933] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:31:49.336 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:31:49.594 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:49.594 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:49.594 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:49.594 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:49.594 13:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:51.494 13:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:51.494 13:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:51.494 13:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:51.494 13:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:51.494 13:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:51.494 13:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:51.494 
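With both paths connected and the serial SPDKISFASTANDAWESOME visible in lsblk, I/O is driven by the fio wrapper whose generated job is echoed just below. A reconstruction of that job, assembled from the wrapper flags (-i 4096 -d 1 -t write -r 1 -v) and the [global]/[job0] dump that follows; the /tmp path and the explicit fio invocation are illustrative, not from the trace:

# hypothetical job-file path; fio-wrapper manages this internally
cat > /tmp/nvmf_nmic.fio <<'EOF'
# inferred flag mapping: -i 4096 -> bs, -d 1 -> iodepth, -t write -> rw, -r 1 -> runtime, -v -> verify
[global]
ioengine=libaio
direct=1
thread=1
bs=4096
iodepth=1
rw=write
time_based=1
runtime=1
do_verify=1
verify=crc32c-intel
verify_dump=1
verify_backlog=512

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nvmf_nmic.fio

The single 1-second write job lands around 2800 read and 3070 write IOPS at queue depth 1 in the stats below; reads appear because verification reads back and CRC-checks what was just written.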
13:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:51.494 [global] 00:31:51.494 thread=1 00:31:51.494 invalidate=1 00:31:51.494 rw=write 00:31:51.494 time_based=1 00:31:51.494 runtime=1 00:31:51.494 ioengine=libaio 00:31:51.494 direct=1 00:31:51.494 bs=4096 00:31:51.494 iodepth=1 00:31:51.494 norandommap=0 00:31:51.494 numjobs=1 00:31:51.494 00:31:51.494 verify_dump=1 00:31:51.494 verify_backlog=512 00:31:51.494 verify_state_save=0 00:31:51.494 do_verify=1 00:31:51.494 verify=crc32c-intel 00:31:51.494 [job0] 00:31:51.494 filename=/dev/nvme0n1 00:31:51.494 Could not set queue depth (nvme0n1) 00:31:51.752 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:51.752 fio-3.35 00:31:51.752 Starting 1 thread 00:31:52.687 00:31:52.687 job0: (groupid=0, jobs=1): err= 0: pid=105593: Fri Dec 6 13:25:59 2024 00:31:52.687 read: IOPS=2796, BW=10.9MiB/s (11.5MB/s)(10.9MiB/1001msec) 00:31:52.687 slat (nsec): min=14334, max=47270, avg=16320.32, stdev=2684.68 00:31:52.687 clat (usec): min=159, max=346, avg=174.97, stdev=11.37 00:31:52.687 lat (usec): min=174, max=363, avg=191.29, stdev=11.66 00:31:52.687 clat percentiles (usec): 00:31:52.687 | 1.00th=[ 163], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 169], 00:31:52.687 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 174], 00:31:52.687 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 186], 95.00th=[ 190], 00:31:52.687 | 99.00th=[ 212], 99.50th=[ 253], 99.90th=[ 297], 99.95th=[ 330], 00:31:52.688 | 99.99th=[ 347] 00:31:52.688 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:31:52.688 slat (usec): min=20, max=111, avg=25.37, stdev= 7.47 00:31:52.688 clat (usec): min=104, max=373, avg=122.31, stdev=12.12 00:31:52.688 lat (usec): min=130, max=429, avg=147.68, stdev=15.78 00:31:52.688 clat percentiles (usec): 00:31:52.688 | 1.00th=[ 113], 5.00th=[ 115], 10.00th=[ 116], 20.00th=[ 117], 00:31:52.688 | 30.00th=[ 118], 40.00th=[ 119], 50.00th=[ 120], 60.00th=[ 122], 00:31:52.688 | 70.00th=[ 123], 80.00th=[ 126], 90.00th=[ 133], 95.00th=[ 137], 00:31:52.688 | 99.00th=[ 163], 99.50th=[ 182], 99.90th=[ 289], 99.95th=[ 318], 00:31:52.688 | 99.99th=[ 375] 00:31:52.688 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:31:52.688 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:52.688 lat (usec) : 250=99.63%, 500=0.37% 00:31:52.688 cpu : usr=2.40%, sys=9.20%, ctx=5878, majf=0, minf=5 00:31:52.688 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.688 issued rwts: total=2799,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.688 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:52.688 00:31:52.688 Run status group 0 (all jobs): 00:31:52.688 READ: bw=10.9MiB/s (11.5MB/s), 10.9MiB/s-10.9MiB/s (11.5MB/s-11.5MB/s), io=10.9MiB (11.5MB), run=1001-1001msec 00:31:52.688 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:31:52.688 00:31:52.688 Disk stats (read/write): 00:31:52.688 nvme0n1: ios=2610/2686, merge=0/0, ticks=478/341, in_queue=819, util=91.38% 00:31:52.688 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:52.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:52.947 rmmod nvme_tcp 00:31:52.947 rmmod nvme_fabrics 00:31:52.947 rmmod nvme_keyring 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 105491 ']' 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 105491 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 105491 ']' 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 105491 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105491 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:52.947 killing process with pid 105491 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105491' 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 105491 00:31:52.947 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 105491 00:31:53.205 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:53.205 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:53.205 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:53.205 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:53.205 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:53.205 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:53.205 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:53.205 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:53.205 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:53.205 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:53.205 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:53.205 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:53.205 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:53.205 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:53.205 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:53.205 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:53.205 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:53.205 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:53.465 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:53.465 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:53.465 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:53.465 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:53.465 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:53.465 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.465 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.465 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.465 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:31:53.465 00:31:53.465 real 0m6.115s 00:31:53.465 user 0m14.740s 00:31:53.465 sys 0m2.166s 00:31:53.465 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:53.465 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.465 ************************************ 00:31:53.465 END TEST nvmf_nmic 00:31:53.465 ************************************ 00:31:53.465 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:53.465 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:53.465 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:53.465 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:53.465 ************************************ 00:31:53.465 START TEST nvmf_fio_target 00:31:53.465 ************************************ 00:31:53.465 13:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:53.725 * Looking for test storage... 
00:31:53.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:53.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.725 --rc genhtml_branch_coverage=1 00:31:53.725 --rc genhtml_function_coverage=1 00:31:53.725 --rc genhtml_legend=1 00:31:53.725 --rc geninfo_all_blocks=1 00:31:53.725 --rc geninfo_unexecuted_blocks=1 00:31:53.725 00:31:53.725 ' 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:53.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.725 --rc genhtml_branch_coverage=1 00:31:53.725 --rc genhtml_function_coverage=1 00:31:53.725 --rc genhtml_legend=1 00:31:53.725 --rc geninfo_all_blocks=1 00:31:53.725 --rc geninfo_unexecuted_blocks=1 00:31:53.725 00:31:53.725 ' 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:53.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.725 --rc genhtml_branch_coverage=1 00:31:53.725 --rc genhtml_function_coverage=1 00:31:53.725 --rc genhtml_legend=1 00:31:53.725 --rc geninfo_all_blocks=1 00:31:53.725 --rc geninfo_unexecuted_blocks=1 00:31:53.725 00:31:53.725 ' 00:31:53.725 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:53.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.725 --rc genhtml_branch_coverage=1 00:31:53.725 --rc genhtml_function_coverage=1 00:31:53.725 --rc genhtml_legend=1 00:31:53.725 --rc geninfo_all_blocks=1 00:31:53.725 --rc geninfo_unexecuted_blocks=1 00:31:53.725 
00:31:53.725 ' 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:53.726 13:26:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:53.726 Cannot find device "nvmf_init_br" 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:53.726 Cannot find device "nvmf_init_br2" 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:53.726 Cannot find device "nvmf_tgt_br" 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:53.726 Cannot find device "nvmf_tgt_br2" 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:31:53.726 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:53.985 Cannot find device "nvmf_init_br" 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:53.985 Cannot find device "nvmf_init_br2" 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:53.985 Cannot find device "nvmf_tgt_br" 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:53.985 Cannot find device "nvmf_tgt_br2" 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:53.985 Cannot find device "nvmf_br" 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:53.985 Cannot find device "nvmf_init_if" 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:53.985 Cannot find device "nvmf_init_if2" 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:53.985 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:53.985 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:53.985 13:26:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:53.985 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:53.986 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:53.986 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:54.245 13:26:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:54.245 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:54.245 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:31:54.245 00:31:54.245 --- 10.0.0.3 ping statistics --- 00:31:54.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.245 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:54.245 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:54.245 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:31:54.245 00:31:54.245 --- 10.0.0.4 ping statistics --- 00:31:54.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.245 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:54.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:54.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:31:54.245 00:31:54.245 --- 10.0.0.1 ping statistics --- 00:31:54.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.245 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:54.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:54.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:31:54.245 00:31:54.245 --- 10.0.0.2 ping statistics --- 00:31:54.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.245 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=105828 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 105828 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 105828 ']' 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:54.245 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:54.245 [2024-12-06 13:26:00.674069] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
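For readers following the trace: nvmf_veth_init (nvmf/common.sh) has just built the self-contained test network used for the rest of this run. Two veth pairs serve the initiator side (nvmf_init_if/nvmf_init_br and nvmf_init_if2/nvmf_init_br2) and two serve the target side (nvmf_tgt_if/nvmf_tgt_br and nvmf_tgt_if2/nvmf_tgt_br2); the target ends are moved into the nvmf_tgt_ns_spdk namespace, the bridge-side ends are all enslaved to nvmf_br, the initiator keeps 10.0.0.1 and 10.0.0.2, the namespaced target answers on 10.0.0.3 and 10.0.0.4, and the four pings confirm both directions before the target starts. A condensed sketch of the equivalent manual setup, with interface names and addresses copied from the trace (only one pair per side shown; the real function also handles stale-device cleanup, the second pair, and comment-tagged iptables rules):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator <-> bridge pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target <-> bridge pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                             # ties both sides together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.3                                          # initiator -> target sanity check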
00:31:54.245 [2024-12-06 13:26:00.675400] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:31:54.245 [2024-12-06 13:26:00.675465] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.505 [2024-12-06 13:26:00.827128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:54.505 [2024-12-06 13:26:00.886498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:54.505 [2024-12-06 13:26:00.886575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:54.505 [2024-12-06 13:26:00.886596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:54.505 [2024-12-06 13:26:00.886610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:54.505 [2024-12-06 13:26:00.886622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:54.505 [2024-12-06 13:26:00.887691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.505 [2024-12-06 13:26:00.887897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:54.505 [2024-12-06 13:26:00.888471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:54.505 [2024-12-06 13:26:00.888495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.505 [2024-12-06 13:26:00.951334] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:54.505 [2024-12-06 13:26:00.951656] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:54.505 [2024-12-06 13:26:00.951966] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:54.505 [2024-12-06 13:26:00.952258] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:54.505 [2024-12-06 13:26:00.954270] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
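The startup lines above decode as follows: nvmfappstart runs the target inside the namespace as "ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF". The core mask 0xF is binary 1111, which is why exactly four reactors come up on cores 0 through 3; -e 0xFFFF requests a broad tracepoint group mask (the app_setup_trace notices); and --interrupt-mode produces the thread.c notices, switching the app thread and the four nvmf_tgt_poll_group threads from busy polling to fd-based event delivery (the fd_group changes listed at the top of this build's git log). Once waitforlisten sees the target listening on /var/tmp/spdk.sock, target/fio.sh provisions it over rpc.py, as traced below. A condensed sketch of that RPC flow, with every call and argument copied from the trace (the repeated bdev_malloc_create calls are shortened and per-command output is omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192           # TCP transport, flags as in the trace
    $rpc bdev_malloc_create 64 512                         # Malloc0, then Malloc1: standalone namespaces
    $rpc bdev_malloc_create 64 512
    $rpc bdev_malloc_create 64 512                         # Malloc2 + Malloc3 feed a raid0
    $rpc bdev_malloc_create 64 512
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_malloc_create 64 512                         # Malloc4..Malloc6 feed a concat
    $rpc bdev_malloc_create 64 512
    $rpc bdev_malloc_create 64 512
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420   # initiator side; the trace also passes --hostnqn/--hostid

After waitforserial counts four namespaces carrying the SPDKISFASTANDAWESOME serial (/dev/nvme0n1 through /dev/nvme0n4) on the initiator, fio-wrapper drives the four verification passes that follow: write and randwrite at iodepth 1, then write and randwrite at iodepth 128, all with 4 KiB blocks, one job per namespace, and crc32c-intel verification enabled.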
00:31:54.505 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:54.505 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:54.505 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:54.505 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:54.505 13:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:54.764 13:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.764 13:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:55.023 [2024-12-06 13:26:01.311162] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.023 13:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:55.282 13:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:55.282 13:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:55.849 13:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:55.849 13:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:56.107 13:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:56.107 13:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:56.366 13:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:56.366 13:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:56.625 13:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:57.199 13:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:57.199 13:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:57.457 13:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:57.457 13:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:57.714 13:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:31:57.714 13:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:57.972 13:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:58.230 13:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:58.230 13:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:58.486 13:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:58.486 13:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:58.743 13:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:58.999 [2024-12-06 13:26:05.519105] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:59.256 13:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:59.513 13:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:59.770 13:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:31:59.770 13:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:59.770 13:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:59.770 13:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:59.770 13:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:59.770 13:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:59.770 13:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:02.310 13:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:02.310 13:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:02.310 13:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:02.310 13:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:02.310 13:26:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:02.310 13:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:32:02.310 13:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:02.310 [global] 00:32:02.310 thread=1 00:32:02.310 invalidate=1 00:32:02.310 rw=write 00:32:02.310 time_based=1 00:32:02.310 runtime=1 00:32:02.310 ioengine=libaio 00:32:02.310 direct=1 00:32:02.310 bs=4096 00:32:02.310 iodepth=1 00:32:02.310 norandommap=0 00:32:02.310 numjobs=1 00:32:02.310 00:32:02.310 verify_dump=1 00:32:02.310 verify_backlog=512 00:32:02.310 verify_state_save=0 00:32:02.310 do_verify=1 00:32:02.310 verify=crc32c-intel 00:32:02.310 [job0] 00:32:02.310 filename=/dev/nvme0n1 00:32:02.310 [job1] 00:32:02.310 filename=/dev/nvme0n2 00:32:02.310 [job2] 00:32:02.310 filename=/dev/nvme0n3 00:32:02.310 [job3] 00:32:02.310 filename=/dev/nvme0n4 00:32:02.310 Could not set queue depth (nvme0n1) 00:32:02.310 Could not set queue depth (nvme0n2) 00:32:02.310 Could not set queue depth (nvme0n3) 00:32:02.310 Could not set queue depth (nvme0n4) 00:32:02.310 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:02.310 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:02.310 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:02.310 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:02.310 fio-3.35 00:32:02.310 Starting 4 threads 00:32:03.246 00:32:03.246 job0: (groupid=0, jobs=1): err= 0: pid=106110: Fri Dec 6 13:26:09 2024 00:32:03.246 read: IOPS=2462, BW=9848KiB/s (10.1MB/s)(9868KiB/1002msec) 00:32:03.246 slat (nsec): min=13434, max=54521, avg=16021.51, stdev=3056.09 00:32:03.246 clat (usec): min=181, max=2188, avg=208.06, stdev=43.70 00:32:03.246 lat (usec): min=196, max=2216, avg=224.09, stdev=44.02 00:32:03.246 clat percentiles (usec): 00:32:03.246 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 198], 00:32:03.246 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 208], 00:32:03.246 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 223], 95.00th=[ 227], 00:32:03.246 | 99.00th=[ 239], 99.50th=[ 245], 99.90th=[ 570], 99.95th=[ 791], 00:32:03.246 | 99.99th=[ 2180] 00:32:03.246 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:32:03.246 slat (usec): min=19, max=120, avg=23.76, stdev= 5.00 00:32:03.246 clat (usec): min=125, max=534, avg=148.06, stdev=13.33 00:32:03.246 lat (usec): min=147, max=560, avg=171.82, stdev=14.87 00:32:03.246 clat percentiles (usec): 00:32:03.246 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:32:03.246 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:32:03.246 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 167], 00:32:03.246 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 265], 99.95th=[ 285], 00:32:03.246 | 99.99th=[ 537] 00:32:03.246 bw ( KiB/s): min= 8416, max=12064, per=29.69%, avg=10240.00, stdev=2579.53, samples=2 00:32:03.246 iops : min= 2104, max= 3016, avg=2560.00, stdev=644.88, samples=2 00:32:03.247 lat (usec) : 250=99.76%, 500=0.16%, 750=0.04%, 1000=0.02% 00:32:03.247 lat (msec) : 
4=0.02% 00:32:03.247 cpu : usr=2.10%, sys=7.29%, ctx=5027, majf=0, minf=9 00:32:03.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:03.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.247 issued rwts: total=2467,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:03.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:03.247 job1: (groupid=0, jobs=1): err= 0: pid=106111: Fri Dec 6 13:26:09 2024 00:32:03.247 read: IOPS=1536, BW=6144KiB/s (6291kB/s)(6144KiB/1000msec) 00:32:03.247 slat (nsec): min=13969, max=57571, avg=21325.25, stdev=5840.93 00:32:03.247 clat (usec): min=173, max=1316, avg=320.08, stdev=59.72 00:32:03.247 lat (usec): min=198, max=1347, avg=341.40, stdev=60.26 00:32:03.247 clat percentiles (usec): 00:32:03.247 | 1.00th=[ 200], 5.00th=[ 217], 10.00th=[ 285], 20.00th=[ 302], 00:32:03.247 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 00:32:03.247 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 359], 95.00th=[ 433], 00:32:03.247 | 99.00th=[ 457], 99.50th=[ 469], 99.90th=[ 1237], 99.95th=[ 1319], 00:32:03.247 | 99.99th=[ 1319] 00:32:03.247 write: IOPS=1761, BW=7044KiB/s (7213kB/s)(7044KiB/1000msec); 0 zone resets 00:32:03.247 slat (nsec): min=23942, max=96137, avg=33194.02, stdev=7002.52 00:32:03.247 clat (usec): min=133, max=770, avg=232.21, stdev=23.58 00:32:03.247 lat (usec): min=162, max=822, avg=265.40, stdev=22.83 00:32:03.247 clat percentiles (usec): 00:32:03.247 | 1.00th=[ 190], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 221], 00:32:03.247 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 231], 60.00th=[ 235], 00:32:03.247 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 260], 00:32:03.247 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 545], 99.95th=[ 775], 00:32:03.247 | 99.99th=[ 775] 00:32:03.247 bw ( KiB/s): min= 8192, max= 8192, per=23.75%, avg=8192.00, stdev= 0.00, samples=1 00:32:03.247 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:03.247 lat (usec) : 250=51.02%, 500=48.86%, 750=0.03%, 1000=0.03% 00:32:03.247 lat (msec) : 2=0.06% 00:32:03.247 cpu : usr=1.40%, sys=7.30%, ctx=3297, majf=0, minf=13 00:32:03.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:03.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.247 issued rwts: total=1536,1761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:03.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:03.247 job2: (groupid=0, jobs=1): err= 0: pid=106112: Fri Dec 6 13:26:09 2024 00:32:03.247 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:32:03.247 slat (nsec): min=16730, max=94094, avg=22448.24, stdev=5699.43 00:32:03.247 clat (usec): min=201, max=2640, avg=320.27, stdev=71.94 00:32:03.247 lat (usec): min=227, max=2681, avg=342.72, stdev=72.66 00:32:03.247 clat percentiles (usec): 00:32:03.247 | 1.00th=[ 215], 5.00th=[ 239], 10.00th=[ 289], 20.00th=[ 302], 00:32:03.247 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 00:32:03.247 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 412], 00:32:03.247 | 99.00th=[ 445], 99.50th=[ 457], 99.90th=[ 742], 99.95th=[ 2638], 00:32:03.247 | 99.99th=[ 2638] 00:32:03.247 write: IOPS=1756, BW=7025KiB/s (7194kB/s)(7032KiB/1001msec); 0 zone resets 00:32:03.247 slat (usec): min=24, max=128, 
avg=32.84, stdev= 7.21 00:32:03.247 clat (usec): min=127, max=581, avg=231.81, stdev=19.32 00:32:03.247 lat (usec): min=167, max=618, avg=264.65, stdev=18.80 00:32:03.247 clat percentiles (usec): 00:32:03.247 | 1.00th=[ 184], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 219], 00:32:03.247 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 235], 00:32:03.247 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 260], 00:32:03.247 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 314], 99.95th=[ 578], 00:32:03.247 | 99.99th=[ 578] 00:32:03.247 bw ( KiB/s): min= 8192, max= 8192, per=23.75%, avg=8192.00, stdev= 0.00, samples=1 00:32:03.247 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:03.247 lat (usec) : 250=49.76%, 500=50.15%, 750=0.06% 00:32:03.247 lat (msec) : 4=0.03% 00:32:03.247 cpu : usr=1.90%, sys=6.80%, ctx=3296, majf=0, minf=13 00:32:03.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:03.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.247 issued rwts: total=1536,1758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:03.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:03.247 job3: (groupid=0, jobs=1): err= 0: pid=106113: Fri Dec 6 13:26:09 2024 00:32:03.247 read: IOPS=2512, BW=9.82MiB/s (10.3MB/s)(9.84MiB/1002msec) 00:32:03.247 slat (nsec): min=13669, max=50956, avg=17000.32, stdev=4556.95 00:32:03.247 clat (usec): min=171, max=2022, avg=199.47, stdev=37.82 00:32:03.247 lat (usec): min=186, max=2037, avg=216.47, stdev=38.22 00:32:03.247 clat percentiles (usec): 00:32:03.247 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 190], 00:32:03.247 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 200], 00:32:03.247 | 70.00th=[ 204], 80.00th=[ 206], 90.00th=[ 212], 95.00th=[ 217], 00:32:03.247 | 99.00th=[ 227], 99.50th=[ 231], 99.90th=[ 262], 99.95th=[ 314], 00:32:03.247 | 99.99th=[ 2024] 00:32:03.247 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:32:03.247 slat (nsec): min=19394, max=81025, avg=26251.00, stdev=8067.41 00:32:03.247 clat (usec): min=127, max=2743, avg=148.11, stdev=52.41 00:32:03.247 lat (usec): min=147, max=2785, avg=174.36, stdev=53.77 00:32:03.247 clat percentiles (usec): 00:32:03.247 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:32:03.247 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:32:03.247 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 165], 00:32:03.247 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 281], 99.95th=[ 326], 00:32:03.247 | 99.99th=[ 2737] 00:32:03.247 bw ( KiB/s): min= 8512, max=11968, per=29.69%, avg=10240.00, stdev=2443.76, samples=2 00:32:03.247 iops : min= 2128, max= 2992, avg=2560.00, stdev=610.94, samples=2 00:32:03.247 lat (usec) : 250=99.88%, 500=0.08% 00:32:03.247 lat (msec) : 4=0.04% 00:32:03.247 cpu : usr=1.50%, sys=8.89%, ctx=5078, majf=0, minf=9 00:32:03.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:03.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.247 issued rwts: total=2518,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:03.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:03.247 00:32:03.247 Run status group 0 (all jobs): 00:32:03.247 READ: bw=31.4MiB/s (32.9MB/s), 
6138KiB/s-9.82MiB/s (6285kB/s-10.3MB/s), io=31.5MiB (33.0MB), run=1000-1002msec 00:32:03.247 WRITE: bw=33.7MiB/s (35.3MB/s), 7025KiB/s-9.98MiB/s (7194kB/s-10.5MB/s), io=33.7MiB (35.4MB), run=1000-1002msec 00:32:03.247 00:32:03.247 Disk stats (read/write): 00:32:03.247 nvme0n1: ios=2098/2319, merge=0/0, ticks=583/359, in_queue=942, util=92.59% 00:32:03.247 nvme0n2: ios=1365/1536, merge=0/0, ticks=462/374, in_queue=836, util=89.28% 00:32:03.247 nvme0n3: ios=1312/1536, merge=0/0, ticks=424/366, in_queue=790, util=88.98% 00:32:03.247 nvme0n4: ios=2048/2362, merge=0/0, ticks=418/368, in_queue=786, util=89.74% 00:32:03.247 13:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:03.247 [global] 00:32:03.247 thread=1 00:32:03.247 invalidate=1 00:32:03.247 rw=randwrite 00:32:03.247 time_based=1 00:32:03.247 runtime=1 00:32:03.247 ioengine=libaio 00:32:03.247 direct=1 00:32:03.247 bs=4096 00:32:03.247 iodepth=1 00:32:03.247 norandommap=0 00:32:03.247 numjobs=1 00:32:03.247 00:32:03.247 verify_dump=1 00:32:03.247 verify_backlog=512 00:32:03.247 verify_state_save=0 00:32:03.247 do_verify=1 00:32:03.247 verify=crc32c-intel 00:32:03.247 [job0] 00:32:03.247 filename=/dev/nvme0n1 00:32:03.247 [job1] 00:32:03.247 filename=/dev/nvme0n2 00:32:03.247 [job2] 00:32:03.247 filename=/dev/nvme0n3 00:32:03.247 [job3] 00:32:03.247 filename=/dev/nvme0n4 00:32:03.247 Could not set queue depth (nvme0n1) 00:32:03.247 Could not set queue depth (nvme0n2) 00:32:03.247 Could not set queue depth (nvme0n3) 00:32:03.247 Could not set queue depth (nvme0n4) 00:32:03.531 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:03.532 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:03.532 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:03.532 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:03.532 fio-3.35 00:32:03.532 Starting 4 threads 00:32:04.466 00:32:04.466 job0: (groupid=0, jobs=1): err= 0: pid=106172: Fri Dec 6 13:26:10 2024 00:32:04.466 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:32:04.466 slat (nsec): min=16255, max=89173, avg=22536.61, stdev=5418.31 00:32:04.466 clat (usec): min=191, max=1164, avg=325.04, stdev=67.38 00:32:04.466 lat (usec): min=208, max=1190, avg=347.58, stdev=67.75 00:32:04.466 clat percentiles (usec): 00:32:04.466 | 1.00th=[ 200], 5.00th=[ 212], 10.00th=[ 225], 20.00th=[ 302], 00:32:04.466 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 326], 00:32:04.466 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 429], 95.00th=[ 445], 00:32:04.466 | 99.00th=[ 474], 99.50th=[ 486], 99.90th=[ 996], 99.95th=[ 1172], 00:32:04.466 | 99.99th=[ 1172] 00:32:04.466 write: IOPS=1716, BW=6865KiB/s (7030kB/s)(6872KiB/1001msec); 0 zone resets 00:32:04.466 slat (usec): min=23, max=120, avg=34.65, stdev=10.98 00:32:04.466 clat (usec): min=119, max=928, avg=231.59, stdev=30.39 00:32:04.466 lat (usec): min=142, max=963, avg=266.24, stdev=29.08 00:32:04.466 clat percentiles (usec): 00:32:04.466 | 1.00th=[ 145], 5.00th=[ 196], 10.00th=[ 208], 20.00th=[ 219], 00:32:04.466 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 237], 00:32:04.466 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 262], 00:32:04.466 
| 99.00th=[ 289], 99.50th=[ 314], 99.90th=[ 619], 99.95th=[ 930], 00:32:04.466 | 99.99th=[ 930] 00:32:04.466 bw ( KiB/s): min= 8192, max= 8192, per=23.88%, avg=8192.00, stdev= 0.00, samples=1 00:32:04.466 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:04.466 lat (usec) : 250=51.35%, 500=48.43%, 750=0.12%, 1000=0.06% 00:32:04.466 lat (msec) : 2=0.03% 00:32:04.466 cpu : usr=1.80%, sys=7.20%, ctx=3254, majf=0, minf=17 00:32:04.466 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.466 issued rwts: total=1536,1718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.466 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:04.466 job1: (groupid=0, jobs=1): err= 0: pid=106173: Fri Dec 6 13:26:10 2024 00:32:04.466 read: IOPS=2435, BW=9742KiB/s (9976kB/s)(9752KiB/1001msec) 00:32:04.466 slat (nsec): min=12909, max=42296, avg=16127.82, stdev=3282.19 00:32:04.466 clat (usec): min=177, max=2186, avg=209.42, stdev=42.12 00:32:04.466 lat (usec): min=196, max=2200, avg=225.55, stdev=42.25 00:32:04.466 clat percentiles (usec): 00:32:04.466 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 198], 00:32:04.466 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 210], 00:32:04.466 | 70.00th=[ 215], 80.00th=[ 219], 90.00th=[ 227], 95.00th=[ 231], 00:32:04.466 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 293], 99.95th=[ 314], 00:32:04.466 | 99.99th=[ 2180] 00:32:04.466 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:04.466 slat (usec): min=19, max=103, avg=23.04, stdev= 5.31 00:32:04.466 clat (usec): min=121, max=1838, avg=149.21, stdev=38.15 00:32:04.466 lat (usec): min=146, max=1858, avg=172.25, stdev=38.83 00:32:04.466 clat percentiles (usec): 00:32:04.466 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:32:04.466 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:32:04.466 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 169], 00:32:04.466 | 99.00th=[ 184], 99.50th=[ 204], 99.90th=[ 586], 99.95th=[ 635], 00:32:04.466 | 99.99th=[ 1844] 00:32:04.467 bw ( KiB/s): min=12288, max=12288, per=35.83%, avg=12288.00, stdev= 0.00, samples=1 00:32:04.467 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:32:04.467 lat (usec) : 250=99.54%, 500=0.38%, 750=0.04% 00:32:04.467 lat (msec) : 2=0.02%, 4=0.02% 00:32:04.467 cpu : usr=2.00%, sys=7.30%, ctx=5005, majf=0, minf=9 00:32:04.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.467 issued rwts: total=2438,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:04.467 job2: (groupid=0, jobs=1): err= 0: pid=106174: Fri Dec 6 13:26:10 2024 00:32:04.467 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:32:04.467 slat (nsec): min=16749, max=91107, avg=22539.83, stdev=5131.89 00:32:04.467 clat (usec): min=202, max=567, avg=323.69, stdev=55.25 00:32:04.467 lat (usec): min=222, max=585, avg=346.23, stdev=55.56 00:32:04.467 clat percentiles (usec): 00:32:04.467 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 237], 20.00th=[ 297], 00:32:04.467 | 30.00th=[ 310], 40.00th=[ 314], 
50.00th=[ 322], 60.00th=[ 326], 00:32:04.467 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 412], 95.00th=[ 433], 00:32:04.467 | 99.00th=[ 457], 99.50th=[ 465], 99.90th=[ 482], 99.95th=[ 570], 00:32:04.467 | 99.99th=[ 570] 00:32:04.467 write: IOPS=1704, BW=6817KiB/s (6981kB/s)(6824KiB/1001msec); 0 zone resets 00:32:04.467 slat (usec): min=24, max=257, avg=33.55, stdev=10.78 00:32:04.467 clat (usec): min=130, max=2736, avg=236.01, stdev=66.93 00:32:04.467 lat (usec): min=165, max=2818, avg=269.56, stdev=67.80 00:32:04.467 clat percentiles (usec): 00:32:04.467 | 1.00th=[ 165], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 221], 00:32:04.467 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 233], 60.00th=[ 239], 00:32:04.467 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 265], 00:32:04.467 | 99.00th=[ 289], 99.50th=[ 338], 99.90th=[ 930], 99.95th=[ 2737], 00:32:04.467 | 99.99th=[ 2737] 00:32:04.467 bw ( KiB/s): min= 8192, max= 8192, per=23.88%, avg=8192.00, stdev= 0.00, samples=1 00:32:04.467 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:04.467 lat (usec) : 250=49.91%, 500=49.97%, 750=0.06%, 1000=0.03% 00:32:04.467 lat (msec) : 4=0.03% 00:32:04.467 cpu : usr=2.00%, sys=6.80%, ctx=3244, majf=0, minf=7 00:32:04.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.467 issued rwts: total=1536,1706,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:04.467 job3: (groupid=0, jobs=1): err= 0: pid=106175: Fri Dec 6 13:26:10 2024 00:32:04.467 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:32:04.467 slat (nsec): min=13529, max=48439, avg=15597.12, stdev=2852.74 00:32:04.467 clat (usec): min=174, max=254, avg=198.78, stdev=10.35 00:32:04.467 lat (usec): min=189, max=268, avg=214.38, stdev=10.60 00:32:04.467 clat percentiles (usec): 00:32:04.467 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 190], 00:32:04.467 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 200], 00:32:04.467 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 212], 95.00th=[ 219], 00:32:04.467 | 99.00th=[ 227], 99.50th=[ 231], 99.90th=[ 241], 99.95th=[ 247], 00:32:04.467 | 99.99th=[ 255] 00:32:04.467 write: IOPS=2596, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1001msec); 0 zone resets 00:32:04.467 slat (nsec): min=19404, max=97066, avg=22754.83, stdev=7156.63 00:32:04.467 clat (usec): min=126, max=1880, avg=147.35, stdev=35.88 00:32:04.467 lat (usec): min=147, max=1901, avg=170.11, stdev=36.97 00:32:04.467 clat percentiles (usec): 00:32:04.467 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:32:04.467 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:32:04.467 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 159], 95.00th=[ 163], 00:32:04.467 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 293], 99.95th=[ 441], 00:32:04.467 | 99.99th=[ 1876] 00:32:04.467 bw ( KiB/s): min=12288, max=12288, per=35.83%, avg=12288.00, stdev= 0.00, samples=1 00:32:04.467 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:32:04.467 lat (usec) : 250=99.90%, 500=0.08% 00:32:04.467 lat (msec) : 2=0.02% 00:32:04.467 cpu : usr=1.70%, sys=7.60%, ctx=5160, majf=0, minf=16 00:32:04.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.467 issued rwts: total=2560,2599,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:04.467 00:32:04.467 Run status group 0 (all jobs): 00:32:04.467 READ: bw=31.5MiB/s (33.0MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=31.5MiB (33.1MB), run=1001-1001msec 00:32:04.467 WRITE: bw=33.5MiB/s (35.1MB/s), 6817KiB/s-10.1MiB/s (6981kB/s-10.6MB/s), io=33.5MiB (35.2MB), run=1001-1001msec 00:32:04.467 00:32:04.467 Disk stats (read/write): 00:32:04.467 nvme0n1: ios=1313/1536, merge=0/0, ticks=452/374, in_queue=826, util=87.37% 00:32:04.467 nvme0n2: ios=2083/2233, merge=0/0, ticks=490/335, in_queue=825, util=88.31% 00:32:04.467 nvme0n3: ios=1252/1536, merge=0/0, ticks=412/374, in_queue=786, util=88.97% 00:32:04.467 nvme0n4: ios=2048/2399, merge=0/0, ticks=415/380, in_queue=795, util=89.63% 00:32:04.467 13:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:04.726 [global] 00:32:04.726 thread=1 00:32:04.726 invalidate=1 00:32:04.726 rw=write 00:32:04.726 time_based=1 00:32:04.726 runtime=1 00:32:04.726 ioengine=libaio 00:32:04.726 direct=1 00:32:04.726 bs=4096 00:32:04.726 iodepth=128 00:32:04.726 norandommap=0 00:32:04.726 numjobs=1 00:32:04.726 00:32:04.726 verify_dump=1 00:32:04.726 verify_backlog=512 00:32:04.726 verify_state_save=0 00:32:04.726 do_verify=1 00:32:04.726 verify=crc32c-intel 00:32:04.726 [job0] 00:32:04.726 filename=/dev/nvme0n1 00:32:04.726 [job1] 00:32:04.726 filename=/dev/nvme0n2 00:32:04.726 [job2] 00:32:04.726 filename=/dev/nvme0n3 00:32:04.726 [job3] 00:32:04.726 filename=/dev/nvme0n4 00:32:04.726 Could not set queue depth (nvme0n1) 00:32:04.726 Could not set queue depth (nvme0n2) 00:32:04.726 Could not set queue depth (nvme0n3) 00:32:04.726 Could not set queue depth (nvme0n4) 00:32:04.726 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:04.726 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:04.726 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:04.726 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:04.726 fio-3.35 00:32:04.726 Starting 4 threads 00:32:06.164 00:32:06.164 job0: (groupid=0, jobs=1): err= 0: pid=106230: Fri Dec 6 13:26:12 2024 00:32:06.164 read: IOPS=3290, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1005msec) 00:32:06.164 slat (usec): min=2, max=7783, avg=151.88, stdev=665.15 00:32:06.164 clat (usec): min=1055, max=37058, avg=18853.38, stdev=7174.63 00:32:06.164 lat (usec): min=4998, max=37072, avg=19005.26, stdev=7222.25 00:32:06.164 clat percentiles (usec): 00:32:06.164 | 1.00th=[ 9110], 5.00th=[11076], 10.00th=[11469], 20.00th=[11994], 00:32:06.164 | 30.00th=[12125], 40.00th=[12649], 50.00th=[20317], 60.00th=[23200], 00:32:06.164 | 70.00th=[24511], 80.00th=[26084], 90.00th=[27395], 95.00th=[28705], 00:32:06.164 | 99.00th=[33162], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963], 00:32:06.164 | 99.99th=[36963] 00:32:06.164 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:32:06.164 slat (usec): min=11, max=6698, avg=131.72, stdev=538.66 00:32:06.164 clat (usec): min=9163, max=36760, 
avg=18001.82, stdev=6490.94 00:32:06.164 lat (usec): min=9178, max=36779, avg=18133.54, stdev=6524.49 00:32:06.164 clat percentiles (usec): 00:32:06.164 | 1.00th=[ 9372], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10945], 00:32:06.164 | 30.00th=[11994], 40.00th=[12387], 50.00th=[20055], 60.00th=[22414], 00:32:06.164 | 70.00th=[23200], 80.00th=[23987], 90.00th=[25035], 95.00th=[26608], 00:32:06.164 | 99.00th=[30540], 99.50th=[31589], 99.90th=[36963], 99.95th=[36963], 00:32:06.164 | 99.99th=[36963] 00:32:06.164 bw ( KiB/s): min=11648, max=17024, per=21.75%, avg=14336.00, stdev=3801.41, samples=2 00:32:06.164 iops : min= 2912, max= 4256, avg=3584.00, stdev=950.35, samples=2 00:32:06.164 lat (msec) : 2=0.01%, 10=5.79%, 20=43.98%, 50=50.21% 00:32:06.164 cpu : usr=3.19%, sys=9.96%, ctx=770, majf=0, minf=12 00:32:06.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:32:06.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:06.164 issued rwts: total=3307,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:06.164 job1: (groupid=0, jobs=1): err= 0: pid=106231: Fri Dec 6 13:26:12 2024 00:32:06.164 read: IOPS=5088, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:32:06.164 slat (usec): min=7, max=7796, avg=87.11, stdev=446.47 00:32:06.164 clat (usec): min=6477, max=28525, avg=11728.48, stdev=2392.44 00:32:06.164 lat (usec): min=6486, max=28536, avg=11815.58, stdev=2413.64 00:32:06.164 clat percentiles (usec): 00:32:06.164 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10159], 00:32:06.164 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11469], 60.00th=[11731], 00:32:06.164 | 70.00th=[12256], 80.00th=[12780], 90.00th=[13435], 95.00th=[14222], 00:32:06.164 | 99.00th=[22152], 99.50th=[27395], 99.90th=[28443], 99.95th=[28443], 00:32:06.164 | 99.99th=[28443] 00:32:06.164 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:32:06.164 slat (usec): min=6, max=9969, avg=90.63, stdev=505.67 00:32:06.164 clat (usec): min=5877, max=32597, avg=11864.68, stdev=3782.24 00:32:06.164 lat (usec): min=5916, max=32656, avg=11955.31, stdev=3821.71 00:32:06.164 clat percentiles (usec): 00:32:06.164 | 1.00th=[ 7832], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10290], 00:32:06.164 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:32:06.164 | 70.00th=[11207], 80.00th=[11469], 90.00th=[15008], 95.00th=[22414], 00:32:06.164 | 99.00th=[27657], 99.50th=[27919], 99.90th=[28181], 99.95th=[28181], 00:32:06.164 | 99.99th=[32637] 00:32:06.164 bw ( KiB/s): min=20439, max=23592, per=33.41%, avg=22015.50, stdev=2229.51, samples=2 00:32:06.164 iops : min= 5109, max= 5898, avg=5503.50, stdev=557.91, samples=2 00:32:06.164 lat (msec) : 10=13.40%, 20=81.72%, 50=4.88% 00:32:06.164 cpu : usr=4.97%, sys=14.91%, ctx=500, majf=0, minf=7 00:32:06.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:32:06.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:06.164 issued rwts: total=5124,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.165 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:06.165 job2: (groupid=0, jobs=1): err= 0: pid=106232: Fri Dec 6 13:26:12 2024 00:32:06.165 read: IOPS=4083, BW=16.0MiB/s 
(16.7MB/s)(16.0MiB/1003msec) 00:32:06.165 slat (usec): min=7, max=11189, avg=117.39, stdev=603.10 00:32:06.165 clat (usec): min=9974, max=31247, avg=15020.40, stdev=4300.43 00:32:06.165 lat (usec): min=10858, max=31276, avg=15137.79, stdev=4307.56 00:32:06.165 clat percentiles (usec): 00:32:06.165 | 1.00th=[10814], 5.00th=[11469], 10.00th=[12649], 20.00th=[13173], 00:32:06.165 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13566], 60.00th=[13829], 00:32:06.165 | 70.00th=[13829], 80.00th=[14222], 90.00th=[22938], 95.00th=[26608], 00:32:06.165 | 99.00th=[29230], 99.50th=[31065], 99.90th=[31065], 99.95th=[31327], 00:32:06.165 | 99.99th=[31327] 00:32:06.165 write: IOPS=4290, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1003msec); 0 zone resets 00:32:06.165 slat (usec): min=6, max=10169, avg=112.56, stdev=524.68 00:32:06.165 clat (usec): min=2072, max=31634, avg=15083.01, stdev=4506.72 00:32:06.165 lat (usec): min=2092, max=31702, avg=15195.57, stdev=4537.95 00:32:06.165 clat percentiles (usec): 00:32:06.165 | 1.00th=[ 5866], 5.00th=[11076], 10.00th=[11338], 20.00th=[11731], 00:32:06.165 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13566], 60.00th=[14091], 00:32:06.165 | 70.00th=[14484], 80.00th=[19268], 90.00th=[22414], 95.00th=[24773], 00:32:06.165 | 99.00th=[27919], 99.50th=[27919], 99.90th=[28181], 99.95th=[29230], 00:32:06.165 | 99.99th=[31589] 00:32:06.165 bw ( KiB/s): min=15655, max=17784, per=25.37%, avg=16719.50, stdev=1505.43, samples=2 00:32:06.165 iops : min= 3913, max= 4446, avg=4179.50, stdev=376.89, samples=2 00:32:06.165 lat (msec) : 4=0.30%, 10=0.86%, 20=83.55%, 50=15.30% 00:32:06.165 cpu : usr=3.39%, sys=12.67%, ctx=552, majf=0, minf=13 00:32:06.165 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:06.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:06.165 issued rwts: total=4096,4303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.165 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:06.165 job3: (groupid=0, jobs=1): err= 0: pid=106233: Fri Dec 6 13:26:12 2024 00:32:06.165 read: IOPS=2650, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1004msec) 00:32:06.165 slat (usec): min=3, max=8100, avg=180.34, stdev=767.66 00:32:06.165 clat (usec): min=2293, max=33899, avg=23012.93, stdev=5927.88 00:32:06.165 lat (usec): min=4667, max=33914, avg=23193.26, stdev=5936.41 00:32:06.165 clat percentiles (usec): 00:32:06.165 | 1.00th=[10028], 5.00th=[12649], 10.00th=[13304], 20.00th=[13829], 00:32:06.165 | 30.00th=[22676], 40.00th=[23987], 50.00th=[24773], 60.00th=[25560], 00:32:06.165 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28443], 95.00th=[30016], 00:32:06.165 | 99.00th=[32637], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:32:06.165 | 99.99th=[33817] 00:32:06.165 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:32:06.165 slat (usec): min=5, max=6722, avg=161.85, stdev=645.30 00:32:06.165 clat (usec): min=10554, max=28756, avg=21332.49, stdev=4693.42 00:32:06.165 lat (usec): min=10874, max=28774, avg=21494.34, stdev=4712.52 00:32:06.165 clat percentiles (usec): 00:32:06.165 | 1.00th=[11207], 5.00th=[11469], 10.00th=[13173], 20.00th=[17695], 00:32:06.165 | 30.00th=[20055], 40.00th=[22676], 50.00th=[23200], 60.00th=[23462], 00:32:06.165 | 70.00th=[24249], 80.00th=[25035], 90.00th=[25822], 95.00th=[26608], 00:32:06.165 | 99.00th=[27919], 99.50th=[28181], 99.90th=[28705], 99.95th=[28705], 00:32:06.165 | 99.99th=[28705] 
00:32:06.165 bw ( KiB/s): min=12080, max=12288, per=18.49%, avg=12184.00, stdev=147.08, samples=2
00:32:06.165 iops : min= 3020, max= 3072, avg=3046.00, stdev=36.77, samples=2
00:32:06.165 lat (msec) : 4=0.02%, 10=0.40%, 20=26.32%, 50=73.26%
00:32:06.165 cpu : usr=2.19%, sys=9.17%, ctx=709, majf=0, minf=7
00:32:06.165 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:32:06.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:06.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:06.165 issued rwts: total=2661,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:06.165 latency : target=0, window=0, percentile=100.00%, depth=128
00:32:06.165
00:32:06.165 Run status group 0 (all jobs):
00:32:06.165 READ: bw=58.9MiB/s (61.8MB/s), 10.4MiB/s-19.9MiB/s (10.9MB/s-20.8MB/s), io=59.3MiB (62.2MB), run=1003-1007msec
00:32:06.165 WRITE: bw=64.4MiB/s (67.5MB/s), 12.0MiB/s-21.8MiB/s (12.5MB/s-22.9MB/s), io=64.8MiB (68.0MB), run=1003-1007msec
00:32:06.165
00:32:06.165 Disk stats (read/write):
00:32:06.165 nvme0n1: ios=3106/3076, merge=0/0, ticks=13601/11374, in_queue=24975, util=88.37%
00:32:06.165 nvme0n2: ios=4560/4608, merge=0/0, ticks=24876/23192, in_queue=48068, util=88.46%
00:32:06.165 nvme0n3: ios=3447/3584, merge=0/0, ticks=13609/14816, in_queue=28425, util=88.96%
00:32:06.165 nvme0n4: ios=2398/2560, merge=0/0, ticks=13295/12444, in_queue=25739, util=89.50%
00:32:06.165 13:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:32:06.165 [global]
00:32:06.165 thread=1
00:32:06.165 invalidate=1
00:32:06.165 rw=randwrite
00:32:06.165 time_based=1
00:32:06.165 runtime=1
00:32:06.165 ioengine=libaio
00:32:06.165 direct=1
00:32:06.165 bs=4096
00:32:06.165 iodepth=128
00:32:06.165 norandommap=0
00:32:06.165 numjobs=1
00:32:06.165
00:32:06.165 verify_dump=1
00:32:06.165 verify_backlog=512
00:32:06.165 verify_state_save=0
00:32:06.165 do_verify=1
00:32:06.165 verify=crc32c-intel
00:32:06.165 [job0]
00:32:06.165 filename=/dev/nvme0n1
00:32:06.165 [job1]
00:32:06.165 filename=/dev/nvme0n2
00:32:06.165 [job2]
00:32:06.165 filename=/dev/nvme0n3
00:32:06.165 [job3]
00:32:06.165 filename=/dev/nvme0n4
00:32:06.165 Could not set queue depth (nvme0n1)
00:32:06.165 Could not set queue depth (nvme0n2)
00:32:06.165 Could not set queue depth (nvme0n3)
00:32:06.165 Could not set queue depth (nvme0n4)
00:32:06.165 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:32:06.165 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:32:06.165 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:32:06.165 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:32:06.165 fio-3.35
00:32:06.165 Starting 4 threads
00:32:07.544
00:32:07.544 job0: (groupid=0, jobs=1): err= 0: pid=106286: Fri Dec 6 13:26:13 2024
00:32:07.544 read: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec)
00:32:07.544 slat (usec): min=6, max=4018, avg=72.49, stdev=346.94
00:32:07.544 clat (usec): min=6225, max=13496, avg=9570.77, stdev=1176.02
00:32:07.544 lat (usec): min=6442, max=13530, avg=9643.26, stdev=1186.15
00:32:07.544 clat percentiles (usec):
00:32:07.544 | 1.00th=[ 7111], 5.00th=[ 7701], 10.00th=[ 8029], 20.00th=[ 8586],
00:32:07.544 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9765],
00:32:07.544 | 70.00th=[10028], 80.00th=[10683], 90.00th=[11076], 95.00th=[11600],
00:32:07.544 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13435], 99.95th=[13435],
00:32:07.544 | 99.99th=[13435]
00:32:07.544 write: IOPS=6833, BW=26.7MiB/s (28.0MB/s)(26.8MiB/1005msec); 0 zone resets
00:32:07.544 slat (usec): min=9, max=3613, avg=68.05, stdev=299.94
00:32:07.544 clat (usec): min=4421, max=14124, avg=9246.65, stdev=1043.64
00:32:07.544 lat (usec): min=4436, max=14144, avg=9314.69, stdev=1066.76
00:32:07.544 clat percentiles (usec):
00:32:07.544 | 1.00th=[ 6259], 5.00th=[ 7635], 10.00th=[ 8291], 20.00th=[ 8717],
00:32:07.544 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9372],
00:32:07.544 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10945],
00:32:07.544 | 99.00th=[12911], 99.50th=[13304], 99.90th=[13698], 99.95th=[13829],
00:32:07.544 | 99.99th=[14091]
00:32:07.544 bw ( KiB/s): min=25368, max=28560, per=51.21%, avg=26964.00, stdev=2257.08, samples=2
00:32:07.544 iops : min= 6342, max= 7140, avg=6741.00, stdev=564.27, samples=2
00:32:07.544 lat (msec) : 10=77.97%, 20=22.03%
00:32:07.544 cpu : usr=5.68%, sys=16.83%, ctx=728, majf=0, minf=13
00:32:07.544 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5%
00:32:07.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:07.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:07.544 issued rwts: total=6656,6868,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:07.544 latency : target=0, window=0, percentile=100.00%, depth=128
00:32:07.544 job1: (groupid=0, jobs=1): err= 0: pid=106287: Fri Dec 6 13:26:13 2024
00:32:07.544 read: IOPS=1336, BW=5347KiB/s (5475kB/s)(5384KiB/1007msec)
00:32:07.544 slat (usec): min=4, max=60911, avg=312.56, stdev=2098.24
00:32:07.544 clat (msec): min=6, max=101, avg=37.02, stdev=19.61
00:32:07.544 lat (msec): min=6, max=103, avg=37.34, stdev=19.72
00:32:07.544 clat percentiles (msec):
00:32:07.544 | 1.00th=[ 11], 5.00th=[ 17], 10.00th=[ 22], 20.00th=[ 23],
00:32:07.544 | 30.00th=[ 26], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 37],
00:32:07.544 | 70.00th=[ 41], 80.00th=[ 45], 90.00th=[ 52], 95.00th=[ 89],
00:32:07.544 | 99.00th=[ 97], 99.50th=[ 97], 99.90th=[ 103], 99.95th=[ 103],
00:32:07.544 | 99.99th=[ 103]
00:32:07.544 write: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec); 0 zone resets
00:32:07.544 slat (usec): min=5, max=41556, avg=368.78, stdev=1830.32
00:32:07.544 clat (msec): min=15, max=122, avg=50.63, stdev=23.64
00:32:07.544 lat (msec): min=16, max=123, avg=51.00, stdev=23.77
00:32:07.544 clat percentiles (msec):
00:32:07.544 | 1.00th=[ 18], 5.00th=[ 21], 10.00th=[ 29], 20.00th=[ 35],
00:32:07.544 | 30.00th=[ 36], 40.00th=[ 41], 50.00th=[ 44], 60.00th=[ 44],
00:32:07.544 | 70.00th=[ 52], 80.00th=[ 81], 90.00th=[ 83], 95.00th=[ 108],
00:32:07.544 | 99.00th=[ 116], 99.50th=[ 117], 99.90th=[ 120], 99.95th=[ 123],
00:32:07.544 | 99.99th=[ 123]
00:32:07.544 bw ( KiB/s): min= 5360, max= 6928, per=11.67%, avg=6144.00, stdev=1108.74, samples=2
00:32:07.544 iops : min= 1340, max= 1732, avg=1536.00, stdev=277.19, samples=2
00:32:07.544 lat (msec) : 10=0.42%, 20=5.00%, 50=72.69%, 100=18.88%, 250=3.02%
00:32:07.544 cpu : usr=1.39%, sys=4.37%, ctx=427, majf=0, minf=9
00:32:07.544 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8%
00:32:07.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:07.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:07.544 issued rwts: total=1346,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:07.544 latency : target=0, window=0, percentile=100.00%, depth=128
00:32:07.544 job2: (groupid=0, jobs=1): err= 0: pid=106288: Fri Dec 6 13:26:13 2024
00:32:07.544 read: IOPS=3011, BW=11.8MiB/s (12.3MB/s)(12.0MiB/1020msec)
00:32:07.544 slat (usec): min=4, max=14788, avg=147.10, stdev=932.45
00:32:07.544 clat (usec): min=5626, max=82920, avg=18208.13, stdev=10544.18
00:32:07.544 lat (usec): min=5640, max=82948, avg=18355.23, stdev=10627.93
00:32:07.544 clat percentiles (usec):
00:32:07.544 | 1.00th=[ 9110], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[11994],
00:32:07.544 | 30.00th=[12387], 40.00th=[13566], 50.00th=[15533], 60.00th=[17171],
00:32:07.544 | 70.00th=[19792], 80.00th=[21627], 90.00th=[27395], 95.00th=[30016],
00:32:07.544 | 99.00th=[74974], 99.50th=[78119], 99.90th=[82314], 99.95th=[83362],
00:32:07.544 | 99.99th=[83362]
00:32:07.544 write: IOPS=3417, BW=13.3MiB/s (14.0MB/s)(13.6MiB/1020msec); 0 zone resets
00:32:07.544 slat (usec): min=5, max=16424, avg=148.80, stdev=832.57
00:32:07.544 clat (usec): min=4262, max=82781, avg=21053.84, stdev=12013.25
00:32:07.544 lat (usec): min=4292, max=82789, avg=21202.64, stdev=12073.09
00:32:07.544 clat percentiles (usec):
00:32:07.544 | 1.00th=[ 5735], 5.00th=[ 9241], 10.00th=[10421], 20.00th=[11207],
00:32:07.544 | 30.00th=[12387], 40.00th=[17433], 50.00th=[20579], 60.00th=[21627],
00:32:07.544 | 70.00th=[22152], 80.00th=[23200], 90.00th=[34866], 95.00th=[47973],
00:32:07.544 | 99.00th=[68682], 99.50th=[71828], 99.90th=[76022], 99.95th=[76022],
00:32:07.544 | 99.99th=[82314]
00:32:07.544 bw ( KiB/s): min=13184, max=13680, per=25.51%, avg=13432.00, stdev=350.72, samples=2
00:32:07.545 iops : min= 3296, max= 3420, avg=3358.00, stdev=87.68, samples=2
00:32:07.545 lat (msec) : 10=6.79%, 20=52.26%, 50=37.60%, 100=3.35%
00:32:07.545 cpu : usr=3.34%, sys=9.22%, ctx=364, majf=0, minf=13
00:32:07.545 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:32:07.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:07.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:07.545 issued rwts: total=3072,3486,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:07.545 latency : target=0, window=0, percentile=100.00%, depth=128
00:32:07.545 job3: (groupid=0, jobs=1): err= 0: pid=106289: Fri Dec 6 13:26:13 2024
00:32:07.545 read: IOPS=1265, BW=5064KiB/s (5185kB/s)(5084KiB/1004msec)
00:32:07.545 slat (usec): min=2, max=18198, avg=300.81, stdev=1586.44
00:32:07.545 clat (msec): min=2, max=119, avg=35.07, stdev=19.57
00:32:07.545 lat (msec): min=6, max=122, avg=35.37, stdev=19.72
00:32:07.545 clat percentiles (msec):
00:32:07.545 | 1.00th=[ 7], 5.00th=[ 13], 10.00th=[ 22], 20.00th=[ 23],
00:32:07.545 | 30.00th=[ 25], 40.00th=[ 28], 50.00th=[ 31], 60.00th=[ 37],
00:32:07.545 | 70.00th=[ 39], 80.00th=[ 42], 90.00th=[ 51], 95.00th=[ 86],
00:32:07.545 | 99.00th=[ 112], 99.50th=[ 117], 99.90th=[ 121], 99.95th=[ 121],
00:32:07.545 | 99.99th=[ 121]
00:32:07.545 write: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec); 0 zone resets
00:32:07.545 slat (usec): min=5, max=30249, avg=394.69, stdev=1626.61
00:32:07.545 clat (msec): min=25, max=119, avg=52.32, stdev=23.04
00:32:07.545 lat (msec): min=25, max=120, avg=52.72, stdev=23.14
00:32:07.545 clat percentiles (msec):
00:32:07.545 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 32], 20.00th=[ 35],
00:32:07.545 | 30.00th=[ 37], 40.00th=[ 41], 50.00th=[ 44], 60.00th=[ 44],
00:32:07.545 | 70.00th=[ 57], 80.00th=[ 81], 90.00th=[ 84], 95.00th=[ 103],
00:32:07.545 | 99.00th=[ 114], 99.50th=[ 115], 99.90th=[ 120], 99.95th=[ 120],
00:32:07.545 | 99.99th=[ 120]
00:32:07.545 bw ( KiB/s): min= 5944, max= 6344, per=11.67%, avg=6144.00, stdev=282.84, samples=2
00:32:07.545 iops : min= 1486, max= 1586, avg=1536.00, stdev=70.71, samples=2
00:32:07.545 lat (msec) : 4=0.04%, 10=0.86%, 20=3.35%, 50=72.92%, 100=18.70%
00:32:07.545 lat (msec) : 250=4.13%
00:32:07.545 cpu : usr=1.60%, sys=4.19%, ctx=422, majf=0, minf=11
00:32:07.545 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8%
00:32:07.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:07.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:07.545 issued rwts: total=1271,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:07.545 latency : target=0, window=0, percentile=100.00%, depth=128
00:32:07.545
00:32:07.545 Run status group 0 (all jobs):
00:32:07.545 READ: bw=47.3MiB/s (49.6MB/s), 5064KiB/s-25.9MiB/s (5185kB/s-27.1MB/s), io=48.2MiB (50.6MB), run=1004-1020msec
00:32:07.545 WRITE: bw=51.4MiB/s (53.9MB/s), 6101KiB/s-26.7MiB/s (6248kB/s-28.0MB/s), io=52.4MiB (55.0MB), run=1004-1020msec
00:32:07.545
00:32:07.545 Disk stats (read/write):
00:32:07.545 nvme0n1: ios=5682/5933, merge=0/0, ticks=25516/23305, in_queue=48821, util=87.78%
00:32:07.545 nvme0n2: ios=1068/1422, merge=0/0, ticks=20444/32421, in_queue=52865, util=88.53%
00:32:07.545 nvme0n3: ios=2560/3039, merge=0/0, ticks=46091/56942, in_queue=103033, util=89.13%
00:32:07.545 nvme0n4: ios=1024/1308, merge=0/0, ticks=18644/32451, in_queue=51095, util=87.81%
00:32:07.545 13:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:32:07.545 13:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=106308
00:32:07.545 13:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:32:07.545 13:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:32:07.545 [global]
00:32:07.545 thread=1
00:32:07.545 invalidate=1
00:32:07.545 rw=read
00:32:07.545 time_based=1
00:32:07.545 runtime=10
00:32:07.545 ioengine=libaio
00:32:07.545 direct=1
00:32:07.545 bs=4096
00:32:07.545 iodepth=1
00:32:07.545 norandommap=1
00:32:07.545 numjobs=1
00:32:07.545
00:32:07.545 [job0]
00:32:07.545 filename=/dev/nvme0n1
00:32:07.545 [job1]
00:32:07.545 filename=/dev/nvme0n2
00:32:07.545 [job2]
00:32:07.545 filename=/dev/nvme0n3
00:32:07.545 [job3]
00:32:07.545 filename=/dev/nvme0n4
00:32:07.545 Could not set queue depth (nvme0n1)
00:32:07.545 Could not set queue depth (nvme0n2)
00:32:07.545 Could not set queue depth (nvme0n3)
00:32:07.545 Could not set queue depth (nvme0n4)
00:32:07.545 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:32:07.545 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:32:07.545 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:32:07.545 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:32:07.545 fio-3.35
00:32:07.545 Starting 4 threads
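The job file printed above is what fio-wrapper generates from its `-p nvmf -i 4096 -d 1 -t read -r 10` arguments. For anyone reproducing one of these jobs by hand, a single job translates roughly to the standalone fio invocation sketched below (a sketch only; it assumes the same /dev/nvme0n1 namespace exposed by cnode1 and uses stock fio flag spellings):

    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread \
        --rw=read --bs=4096 --iodepth=1 --numjobs=1 \
        --invalidate=1 --norandommap \
        --time_based --runtime=10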
00:32:10.830 13:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
00:32:10.830 fio: pid=106351, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:32:10.830 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=51302400, buflen=4096
00:32:10.830 13:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
00:32:11.089 fio: pid=106350, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:32:11.089 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=58654720, buflen=4096
00:32:11.089 13:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:32:11.089 13:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:32:11.348 fio: pid=106348, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:32:11.348 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2301952, buflen=4096
00:32:11.348 13:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:32:11.348 13:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:32:11.607 fio: pid=106349, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:32:11.607 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=3047424, buflen=4096
00:32:11.607
00:32:11.607 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=106348: Fri Dec 6 13:26:18 2024
00:32:11.607 read: IOPS=4753, BW=18.6MiB/s (19.5MB/s)(66.2MiB/3565msec)
00:32:11.607 slat (usec): min=10, max=12810, avg=17.59, stdev=160.22
00:32:11.607 clat (usec): min=154, max=2873, avg=191.41, stdev=52.19
00:32:11.607 lat (usec): min=168, max=13052, avg=209.01, stdev=170.64
00:32:11.607 clat percentiles (usec):
00:32:11.607 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 176],
00:32:11.607 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188],
00:32:11.607 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 231],
00:32:11.607 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 816], 99.95th=[ 1303],
00:32:11.607 | 99.99th=[ 2114]
00:32:11.607 bw ( KiB/s): min=19648, max=20016, per=31.45%, avg=19889.33, stdev=127.94, samples=6
00:32:11.607 iops : min= 4912, max= 5004, avg=4972.33, stdev=31.99, samples=6
00:32:11.607 lat (usec) : 250=95.20%, 500=4.60%, 750=0.07%, 1000=0.05%
00:32:11.607 lat (msec) : 2=0.05%, 4=0.02%
00:32:11.607 cpu : usr=1.37%, sys=5.64%, ctx=16952, majf=0, minf=1
00:32:11.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:32:11.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:11.607 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:11.607 issued rwts: total=16947,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:11.607 latency : target=0, window=0, percentile=100.00%, depth=1
00:32:11.607 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=106349: Fri Dec 6 13:26:18 2024
00:32:11.607 read: IOPS=4445, BW=17.4MiB/s (18.2MB/s)(66.9MiB/3853msec)
00:32:11.607 slat (usec): min=11, max=14685, avg=18.76, stdev=161.20
00:32:11.607 clat (usec): min=3, max=16209, avg=204.64, stdev=146.40
00:32:11.607 lat (usec): min=167, max=16227, avg=223.40, stdev=218.60
00:32:11.607 clat percentiles (usec):
00:32:11.607 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182],
00:32:11.607 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 196],
00:32:11.607 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 223], 95.00th=[ 314],
00:32:11.607 | 99.00th=[ 416], 99.50th=[ 433], 99.90th=[ 545], 99.95th=[ 1029],
00:32:11.607 | 99.99th=[ 7308]
00:32:11.607 bw ( KiB/s): min=12816, max=19440, per=27.98%, avg=17693.00, stdev=2418.16, samples=7
00:32:11.607 iops : min= 3204, max= 4860, avg=4423.14, stdev=604.62, samples=7
00:32:11.607 lat (usec) : 4=0.02%, 100=0.01%, 250=93.32%, 500=6.52%, 750=0.06%
00:32:11.607 lat (usec) : 1000=0.02%
00:32:11.607 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01%, 20=0.01%
00:32:11.607 cpu : usr=1.19%, sys=6.05%, ctx=17160, majf=0, minf=2
00:32:11.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:32:11.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:11.607 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:11.607 issued rwts: total=17129,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:11.607 latency : target=0, window=0, percentile=100.00%, depth=1
00:32:11.607 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=106350: Fri Dec 6 13:26:18 2024
00:32:11.607 read: IOPS=4338, BW=16.9MiB/s (17.8MB/s)(55.9MiB/3301msec)
00:32:11.607 slat (usec): min=11, max=9423, avg=18.45, stdev=101.63
00:32:11.607 clat (usec): min=170, max=2377, avg=210.54, stdev=54.81
00:32:11.607 lat (usec): min=184, max=9694, avg=228.99, stdev=116.01
00:32:11.607 clat percentiles (usec):
00:32:11.607 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194],
00:32:11.607 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 206],
00:32:11.607 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 227], 95.00th=[ 285],
00:32:11.607 | 99.00th=[ 322], 99.50th=[ 388], 99.90th=[ 799], 99.95th=[ 1532],
00:32:11.607 | 99.99th=[ 2073]
00:32:11.607 bw ( KiB/s): min=17712, max=18136, per=28.30%, avg=17898.67, stdev=168.85, samples=6
00:32:11.607 iops : min= 4428, max= 4534, avg=4474.67, stdev=42.21, samples=6
00:32:11.607 lat (usec) : 250=92.93%, 500=6.83%, 750=0.12%, 1000=0.03%
00:32:11.607 lat (msec) : 2=0.06%, 4=0.03%
00:32:11.607 cpu : usr=1.12%, sys=6.21%, ctx=14332, majf=0, minf=1
00:32:11.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:32:11.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:11.607 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:11.607 issued rwts: total=14321,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:11.607 latency : target=0, window=0, percentile=100.00%, depth=1
00:32:11.607 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=106351: Fri Dec 6 13:26:18 2024
00:32:11.607 read: IOPS=4161, BW=16.3MiB/s (17.0MB/s)(48.9MiB/3010msec)
00:32:11.607 slat (usec): min=11, max=109, avg=15.47, stdev= 3.18
00:32:11.607 clat (usec): min=124, max=2761, avg=223.32, stdev=52.15
00:32:11.607 lat (usec): min=191, max=2778, avg=238.78, stdev=52.12
00:32:11.607 clat percentiles (usec):
00:32:11.607 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200],
00:32:11.607 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217],
00:32:11.607 | 70.00th=[ 221], 80.00th=[ 229], 90.00th=[ 251], 95.00th=[ 322],
00:32:11.607 | 99.00th=[ 412], 99.50th=[ 429], 99.90th=[ 490], 99.95th=[ 619],
00:32:11.607 | 99.99th=[ 2073]
00:32:11.607 bw ( KiB/s): min=13024, max=17672, per=26.39%, avg=16688.00, stdev=1824.58, samples=6
00:32:11.607 iops : min= 3256, max= 4418, avg=4172.00, stdev=456.14, samples=6
00:32:11.607 lat (usec) : 250=89.90%, 500=10.00%, 750=0.06%, 1000=0.01%
00:32:11.607 lat (msec) : 2=0.02%, 4=0.02%
00:32:11.607 cpu : usr=0.90%, sys=5.45%, ctx=12530, majf=0, minf=2
00:32:11.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:32:11.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:11.607 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:11.607 issued rwts: total=12526,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:11.607 latency : target=0, window=0, percentile=100.00%, depth=1
00:32:11.607
00:32:11.607 Run status group 0 (all jobs):
00:32:11.607 READ: bw=61.8MiB/s (64.8MB/s), 16.3MiB/s-18.6MiB/s (17.0MB/s-19.5MB/s), io=238MiB (250MB), run=3010-3853msec
00:32:11.607
00:32:11.607 Disk stats (read/write):
00:32:11.607 nvme0n1: ios=15968/0, merge=0/0, ticks=3146/0, in_queue=3146, util=95.14%
00:32:11.607 nvme0n2: ios=15925/0, merge=0/0, ticks=3325/0, in_queue=3325, util=95.45%
00:32:11.607 nvme0n3: ios=13723/0, merge=0/0, ticks=2910/0, in_queue=2910, util=96.30%
00:32:11.607 nvme0n4: ios=12096/0, merge=0/0, ticks=2698/0, in_queue=2698, util=96.76%
00:32:11.607 13:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:32:11.607 13:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:32:11.865 13:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:32:11.865 13:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:32:12.124 13:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:32:12.124 13:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:32:12.382 13:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:32:12.382 13:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:32:12.950 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:32:12.950 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:32:13.209 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:32:13.209 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 106308
00:32:13.209 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:32:13.209 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:32:13.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:32:13.209 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:32:13.209 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0
00:32:13.209 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:32:13.209 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:32:13.209 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:32:13.209 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:32:13.209 nvmf hotplug test: fio failed as expected
00:32:13.209 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0
00:32:13.209 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:32:13.209 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:32:13.209 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:13.487 rmmod nvme_tcp
00:32:13.487 rmmod nvme_fabrics
00:32:13.487 rmmod nvme_keyring
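At this point the hotplug scenario has completed: the malloc and raid bdevs were deleted underneath the running fio job, fio exited with err=95 (Operation not supported) exactly as the test expects, and the script begins tearing the fabric down. Condensed from the trace above, the manual equivalent is roughly this sketch:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # drop the initiator session
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp                            # unload; the rmmod lines above show
                                                       # nvme_fabrics/nvme_keyring going with it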
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 105828 ']'
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 105828
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 105828 ']'
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 105828
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105828
00:32:13.487 killing process with pid 105828
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105828'
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 105828
00:32:13.487 13:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 105828
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0
00:32:13.747
00:32:13.747 real 0m20.329s
00:32:13.747 user 0m59.961s
00:32:13.747 sys 0m12.351s
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:13.747 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:32:13.747 ************************************
00:32:13.747 END TEST nvmf_fio_target
00:32:13.747 ************************************
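Before the next test starts, it is worth noting what the nvmf_veth_fini trace above actually did. Stripped of the xtrace prefixes, the teardown pattern is roughly the following sketch (the body of _remove_spdk_ns is not shown in this trace, so the final namespace deletion is an assumption):

    # detach the four bridge-side veth ends, then delete everything
    for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$p" nomaster
        ip link set "$p" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumption: what _remove_spdk_ns amounts to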
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:14.007 ************************************
00:32:14.007 START TEST nvmf_bdevio
00:32:14.007 ************************************
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:32:14.007 * Looking for test storage...
00:32:14.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:32:14.007 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:32:14.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:14.008 --rc genhtml_branch_coverage=1
00:32:14.008 --rc genhtml_function_coverage=1
00:32:14.008 --rc genhtml_legend=1
00:32:14.008 --rc geninfo_all_blocks=1
00:32:14.008 --rc geninfo_unexecuted_blocks=1
00:32:14.008
00:32:14.008 '
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:32:14.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:14.008 --rc genhtml_branch_coverage=1
00:32:14.008 --rc genhtml_function_coverage=1
00:32:14.008 --rc genhtml_legend=1
00:32:14.008 --rc geninfo_all_blocks=1
00:32:14.008 --rc geninfo_unexecuted_blocks=1
00:32:14.008
00:32:14.008 '
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:32:14.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:14.008 --rc genhtml_branch_coverage=1
00:32:14.008 --rc genhtml_function_coverage=1
00:32:14.008 --rc genhtml_legend=1
00:32:14.008 --rc geninfo_all_blocks=1
00:32:14.008 --rc geninfo_unexecuted_blocks=1
00:32:14.008
00:32:14.008 '
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:32:14.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:14.008 --rc genhtml_branch_coverage=1
00:32:14.008 --rc genhtml_function_coverage=1
00:32:14.008 --rc genhtml_legend=1
00:32:14.008 --rc geninfo_all_blocks=1
00:32:14.008 --rc geninfo_unexecuted_blocks=1
00:32:14.008
00:32:14.008 '
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:14.008 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:32:14.009 Cannot find device "nvmf_init_br"
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:32:14.009 Cannot find device "nvmf_init_br2"
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true
00:32:14.009 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:32:14.267 Cannot find device "nvmf_tgt_br"
00:32:14.267 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true
00:32:14.267 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:32:14.267 Cannot find device "nvmf_tgt_br2"
00:32:14.267 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true
00:32:14.267 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:32:14.267 Cannot find device "nvmf_init_br"
00:32:14.267 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true
00:32:14.267 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:32:14.267 Cannot find device "nvmf_init_br2"
00:32:14.267 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true
00:32:14.267 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:32:14.267 Cannot find device "nvmf_tgt_br"
00:32:14.267 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true
00:32:14.267 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:32:14.267 Cannot find device "nvmf_tgt_br2"
00:32:14.267 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true
00:32:14.267 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:32:14.267 Cannot find device "nvmf_br"
00:32:14.267 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true
00:32:14.267 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:32:14.267 Cannot find device "nvmf_init_if"
00:32:14.267 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true
00:32:14.267 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:32:14.267 Cannot find device "nvmf_init_if2"
00:32:14.267 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true
00:32:14.267 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:32:14.268 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:32:14.268 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:32:14.268 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
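The nvmf_veth_init trace above builds a small bridged topology: two veth pairs for the initiator side (10.0.0.1/10.0.0.2) stay in the root namespace, two for the target side (10.0.0.3/10.0.0.4) are moved into nvmf_tgt_ns_spdk, and all four bridge-side ends are enslaved to nvmf_br. Reduced to a single initiator/target pair, the recipe is roughly this sketch (link-up steps from the @196-@208 lines are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + bridge end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port

The ping checks that follow simply verify connectivity across the bridge in both directions before the target application is started.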
00:32:14.527 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:32:14.527 00:32:14.527 --- 10.0.0.3 ping statistics --- 00:32:14.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.527 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:14.527 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:14.527 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:32:14.527 00:32:14.527 --- 10.0.0.4 ping statistics --- 00:32:14.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.527 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:14.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:14.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:32:14.527 00:32:14.527 --- 10.0.0.1 ping statistics --- 00:32:14.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.527 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:14.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:14.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:32:14.527 00:32:14.527 --- 10.0.0.2 ping statistics --- 00:32:14.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.527 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=106727 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 106727 00:32:14.527 
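
With all four addresses answering pings in both directions, the harness starts the target inside the namespace. The exact launch command appears just below (`nvmfappstart` expands to it); a sketch of the start-and-wait pattern, where the mask and flags are from the trace but the polling body is an assumption (the real `waitforlisten` retries up to 100 times, per the `max_retries=100` visible below):

```bash
NS_EXEC=(ip netns exec nvmf_tgt_ns_spdk)

# -m 0x78 places reactors on cores 3-6 (matching the "Reactor started on
# core 3/4/5/6" notices below); --interrupt-mode arms fd-based wakeups
# instead of busy polling
"${NS_EXEC[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
nvmfpid=$!

# hypothetical reduction of waitforlisten: poll the RPC socket until it answers
for ((i = 0; i < 100; i++)); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null && break
    sleep 0.1
done
```
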
13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 106727 ']' 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:14.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:14.527 13:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:14.527 [2024-12-06 13:26:20.981692] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:14.527 [2024-12-06 13:26:20.982977] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:32:14.527 [2024-12-06 13:26:20.983045] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:14.786 [2024-12-06 13:26:21.132465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:14.786 [2024-12-06 13:26:21.165819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:14.786 [2024-12-06 13:26:21.165873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:14.786 [2024-12-06 13:26:21.165884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:14.786 [2024-12-06 13:26:21.165893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:14.786 [2024-12-06 13:26:21.165900] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:14.786 [2024-12-06 13:26:21.166629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:14.786 [2024-12-06 13:26:21.166759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:14.786 [2024-12-06 13:26:21.166878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:14.786 [2024-12-06 13:26:21.167300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:14.786 [2024-12-06 13:26:21.216721] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:14.786 [2024-12-06 13:26:21.216833] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:14.786 [2024-12-06 13:26:21.217202] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:14.786 [2024-12-06 13:26:21.217492] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:14.786 [2024-12-06 13:26:21.217738] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:14.786 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:14.786 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:14.786 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:14.787 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:14.787 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:14.787 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:14.787 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:14.787 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.787 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:14.787 [2024-12-06 13:26:21.296301] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:15.046 Malloc0 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:15.046 [2024-12-06 13:26:21.360345] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:15.046 { 00:32:15.046 "params": { 00:32:15.046 "name": "Nvme$subsystem", 00:32:15.046 "trtype": "$TEST_TRANSPORT", 00:32:15.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:15.046 "adrfam": "ipv4", 00:32:15.046 "trsvcid": "$NVMF_PORT", 00:32:15.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:15.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:15.046 "hdgst": ${hdgst:-false}, 00:32:15.046 "ddgst": ${ddgst:-false} 00:32:15.046 }, 00:32:15.046 "method": "bdev_nvme_attach_controller" 00:32:15.046 } 00:32:15.046 EOF 00:32:15.046 )") 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:15.046 13:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:15.046 "params": { 00:32:15.046 "name": "Nvme1", 00:32:15.046 "trtype": "tcp", 00:32:15.046 "traddr": "10.0.0.3", 00:32:15.046 "adrfam": "ipv4", 00:32:15.046 "trsvcid": "4420", 00:32:15.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:15.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:15.046 "hdgst": false, 00:32:15.046 "ddgst": false 00:32:15.046 }, 00:32:15.046 "method": "bdev_nvme_attach_controller" 00:32:15.046 }' 00:32:15.046 [2024-12-06 13:26:21.414888] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
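
The five `rpc_cmd` calls above configure the target end to end; `bdevio` is then launched as a userspace initiator, consuming the attach-controller JSON printed above via `/dev/fd/62`. The same target-side sequence as plain `rpc.py` invocations (`rpc_cmd` is the harness wrapper around `scripts/rpc.py`; flags copied from the trace):

```bash
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192          # -u 8192: 8 KiB in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM disk, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001                          # -a: allow any host to connect
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420
```

A kernel initiator could reach the same listener with `nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1`, but this test deliberately stays in userspace through the SPDK NVMe driver.
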
00:32:15.046 [2024-12-06 13:26:21.414989] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106763 ] 00:32:15.046 [2024-12-06 13:26:21.565860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:15.305 [2024-12-06 13:26:21.610972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.305 [2024-12-06 13:26:21.611054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:15.305 [2024-12-06 13:26:21.611065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.305 I/O targets: 00:32:15.305 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:15.305 00:32:15.305 00:32:15.305 CUnit - A unit testing framework for C - Version 2.1-3 00:32:15.305 http://cunit.sourceforge.net/ 00:32:15.305 00:32:15.305 00:32:15.305 Suite: bdevio tests on: Nvme1n1 00:32:15.305 Test: blockdev write read block ...passed 00:32:15.563 Test: blockdev write zeroes read block ...passed 00:32:15.563 Test: blockdev write zeroes read no split ...passed 00:32:15.563 Test: blockdev write zeroes read split ...passed 00:32:15.563 Test: blockdev write zeroes read split partial ...passed 00:32:15.563 Test: blockdev reset ...[2024-12-06 13:26:21.860681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:15.563 [2024-12-06 13:26:21.860849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248cf70 (9): Bad file descriptor 00:32:15.563 passed 00:32:15.563 Test: blockdev write read 8 blocks ...[2024-12-06 13:26:21.865213] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:32:15.563 passed 00:32:15.563 Test: blockdev write read size > 128k ...passed 00:32:15.563 Test: blockdev write read invalid size ...passed 00:32:15.563 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:15.563 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:15.563 Test: blockdev write read max offset ...passed 00:32:15.563 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:15.563 Test: blockdev writev readv 8 blocks ...passed 00:32:15.563 Test: blockdev writev readv 30 x 1block ...passed 00:32:15.563 Test: blockdev writev readv block ...passed 00:32:15.563 Test: blockdev writev readv size > 128k ...passed 00:32:15.563 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:15.563 Test: blockdev comparev and writev ...[2024-12-06 13:26:22.037329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:15.563 [2024-12-06 13:26:22.037382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:15.563 [2024-12-06 13:26:22.037403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:15.563 [2024-12-06 13:26:22.037414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:15.563 [2024-12-06 13:26:22.037798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:15.563 [2024-12-06 13:26:22.037820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:15.563 [2024-12-06 13:26:22.037838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:15.563 [2024-12-06 13:26:22.037848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:15.563 [2024-12-06 13:26:22.038272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:15.563 [2024-12-06 13:26:22.038303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:15.563 [2024-12-06 13:26:22.038321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:15.564 [2024-12-06 13:26:22.038331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:15.564 [2024-12-06 13:26:22.038803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:15.564 [2024-12-06 13:26:22.038835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:15.564 [2024-12-06 13:26:22.038853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:15.564 [2024-12-06 13:26:22.038863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
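
The comparev-and-writev failures above are the expected outcome, not an error: the test drives fused COMPARE + WRITE pairs through the failure path, so each pair completes as a compare miscompare followed by an aborted fused write. Decoding the sct/sc pairs printed in those completions:

```
02/85 -> Status Code Type 2h (Media Errors), Status Code 85h: Compare Failure
00/09 -> Status Code Type 0h (Generic),      Status Code 09h: Command Aborted
         due to Failed Fused Command
```

Which is why the suite still reports the test as passed immediately afterwards.
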
00:32:15.564 passed 00:32:15.822 Test: blockdev nvme passthru rw ...passed 00:32:15.822 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:26:22.122334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:15.822 [2024-12-06 13:26:22.122373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:15.822 [2024-12-06 13:26:22.122522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:15.822 [2024-12-06 13:26:22.122547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:15.822 passed 00:32:15.822 Test: blockdev nvme admin passthru ...[2024-12-06 13:26:22.122695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:15.822 [2024-12-06 13:26:22.122729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:15.822 [2024-12-06 13:26:22.122879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:15.823 [2024-12-06 13:26:22.122900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:15.823 passed 00:32:15.823 Test: blockdev copy ...passed 00:32:15.823 00:32:15.823 Run Summary: Type Total Ran Passed Failed Inactive 00:32:15.823 suites 1 1 n/a 0 0 00:32:15.823 tests 23 23 23 0 0 00:32:15.823 asserts 152 152 152 0 n/a 00:32:15.823 00:32:15.823 Elapsed time = 0.854 seconds 00:32:15.823 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:15.823 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.823 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:15.823 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.823 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:15.823 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:15.823 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:15.823 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:16.081 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:16.081 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:16.081 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:16.081 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:16.081 rmmod nvme_tcp 00:32:16.081 rmmod nvme_fabrics 00:32:16.081 rmmod nvme_keyring 00:32:16.081 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:16.081 13:26:22 
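
After the suite completes (all 23 tests and 152 asserts passing), the subsystem is deleted and `nvmftestfini` unloads the kernel NVMe modules. `rmmod` can legitimately fail while in-flight references drain, hence the `set +e` and the `{1..20}` retry loop in the trace. A reduced sketch of that cleanup (the backoff delay is an assumption; it is not shown in the log):

```bash
# Reduced nvmfcleanup, following the trace above: flush, then tolerate
# module-removal failures until the last reference is gone.
sync
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 1          # assumption: the harness's actual backoff is not shown
done
modprobe -v -r nvme-fabrics
set -e
```
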
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:16.081 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:16.081 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 106727 ']' 00:32:16.081 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 106727 00:32:16.081 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 106727 ']' 00:32:16.081 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 106727 00:32:16.081 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:16.081 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:16.081 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106727 00:32:16.081 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:16.081 killing process with pid 106727 00:32:16.082 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:16.082 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106727' 00:32:16.082 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 106727 00:32:16.082 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 106727 00:32:16.082 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:16.082 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:16.082 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:16.082 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:32:16.082 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:16.082 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:16.082 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:32:16.340 00:32:16.340 real 0m2.549s 00:32:16.340 user 0m6.282s 00:32:16.340 sys 0m1.057s 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:16.340 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:16.340 ************************************ 00:32:16.340 END TEST nvmf_bdevio 00:32:16.340 ************************************ 00:32:16.600 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:16.600 00:32:16.600 real 3m33.249s 00:32:16.600 user 9m44.440s 00:32:16.600 sys 1m21.693s 00:32:16.600 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:16.600 13:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:16.600 ************************************ 00:32:16.600 END TEST nvmf_target_core_interrupt_mode 00:32:16.600 ************************************ 00:32:16.600 13:26:22 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:16.600 13:26:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:16.600 13:26:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:16.600 13:26:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:16.600 ************************************ 00:32:16.600 START TEST nvmf_interrupt 00:32:16.600 ************************************ 00:32:16.600 13:26:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:16.600 * Looking for test storage... 00:32:16.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:16.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.600 --rc genhtml_branch_coverage=1 00:32:16.600 --rc genhtml_function_coverage=1 00:32:16.600 --rc genhtml_legend=1 00:32:16.600 --rc geninfo_all_blocks=1 00:32:16.600 --rc geninfo_unexecuted_blocks=1 00:32:16.600 00:32:16.600 ' 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:16.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.600 --rc genhtml_branch_coverage=1 00:32:16.600 --rc genhtml_function_coverage=1 00:32:16.600 --rc genhtml_legend=1 00:32:16.600 --rc geninfo_all_blocks=1 00:32:16.600 --rc geninfo_unexecuted_blocks=1 00:32:16.600 00:32:16.600 ' 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:16.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.600 --rc genhtml_branch_coverage=1 00:32:16.600 --rc genhtml_function_coverage=1 00:32:16.600 --rc genhtml_legend=1 00:32:16.600 --rc geninfo_all_blocks=1 00:32:16.600 --rc geninfo_unexecuted_blocks=1 00:32:16.600 00:32:16.600 ' 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:16.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.600 --rc genhtml_branch_coverage=1 00:32:16.600 --rc genhtml_function_coverage=1 00:32:16.600 --rc genhtml_legend=1 00:32:16.600 --rc geninfo_all_blocks=1 00:32:16.600 --rc geninfo_unexecuted_blocks=1 00:32:16.600 00:32:16.600 ' 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
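
The `lt 1.15 2` probe above gates the lcov coverage flags on the installed lcov version. `scripts/common.sh` implements the comparison by splitting each version string on `.`, `-` and `:` and walking the components numerically, which is the loop visible in the trace. A reduced reimplementation, assuming purely numeric components (the real helper also normalizes non-numeric parts via its `decimal` helper):

```bash
# Condensed version of the cmp_versions walk traced above: split both
# versions on '.', '-' and ':', then compare component by component.
lt() {
    local -a ver1 ver2
    local v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first is greater
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first is less
    done
    return 1   # equal is not "less than"
}

lt 1.15 2 && echo "lcov 1.15 predates 2.x"
```
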
00:32:16.600 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:32:16.860 13:26:23 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
00:32:16.860 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:16.861 Cannot find device "nvmf_init_br" 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:16.861 Cannot find device "nvmf_init_br2" 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:16.861 Cannot find device "nvmf_tgt_br" 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:16.861 Cannot find device "nvmf_tgt_br2" 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:16.861 Cannot find device "nvmf_init_br" 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:16.861 Cannot find device "nvmf_init_br2" 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:16.861 Cannot find device "nvmf_tgt_br" 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:16.861 Cannot find device "nvmf_tgt_br2" 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:16.861 Cannot find device "nvmf_br" 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 00:32:16.861 Cannot find device "nvmf_init_if" 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:16.861 Cannot find device "nvmf_init_if2" 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:16.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:16.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:16.861 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:17.119 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:17.119 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:32:17.119 00:32:17.119 --- 10.0.0.3 ping statistics --- 00:32:17.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.119 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:17.119 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:17.119 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:32:17.119 00:32:17.119 --- 10.0.0.4 ping statistics --- 00:32:17.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.119 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:17.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:17.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:32:17.119 00:32:17.119 --- 10.0.0.1 ping statistics --- 00:32:17.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.119 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:17.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:17.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:32:17.119 00:32:17.119 --- 10.0.0.2 ping statistics --- 00:32:17.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.119 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:17.119 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:17.120 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:17.120 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:17.120 13:26:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:17.120 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:17.120 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:17.120 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:17.120 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=107011 00:32:17.120 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:17.120 13:26:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 107011 00:32:17.120 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 107011 ']' 00:32:17.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.120 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.120 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:17.120 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.120 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:17.120 13:26:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:17.377 [2024-12-06 13:26:23.656779] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:17.377 [2024-12-06 13:26:23.658153] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:32:17.377 [2024-12-06 13:26:23.658394] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:17.377 [2024-12-06 13:26:23.810531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:17.377 [2024-12-06 13:26:23.850881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:17.377 [2024-12-06 13:26:23.851141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:17.377 [2024-12-06 13:26:23.851366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:17.377 [2024-12-06 13:26:23.851536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:17.377 [2024-12-06 13:26:23.851742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:17.377 [2024-12-06 13:26:23.852638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.377 [2024-12-06 13:26:23.852651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.635 [2024-12-06 13:26:23.911702] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:17.635 [2024-12-06 13:26:23.912089] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:17.635 [2024-12-06 13:26:23.912351] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:18.201 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:18.201 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:18.201 13:26:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:18.201 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:18.201 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:18.458 13:26:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:18.458 13:26:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:18.459 5000+0 records in 00:32:18.459 5000+0 records out 00:32:18.459 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0387413 s, 264 MB/s 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:18.459 AIO0 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:18.459 [2024-12-06 13:26:24.825768] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- 
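
Unlike the bdevio run, which used a malloc bdev, the interrupt test backs its namespace with an AIO bdev over a plain file: `dd` creates a 10,240,000-byte backing file (5000 blocks of 2048 B, matching the throughput line above) and `bdev_aio_create` exposes it as `AIO0` with a 2048-byte logical block size. The equivalent standalone steps (the `rpc.py` path is assumed relative to the SPDK tree):

```bash
AIOFILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile

# 5000 x 2048 B = 10,240,000 B backing file, as in the dd output above
dd if=/dev/zero of="$AIOFILE" bs=2048 count=5000

# expose it as bdev AIO0 with a 2048-byte block size
./scripts/rpc.py bdev_aio_create "$AIOFILE" AIO0 2048
```
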
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:18.459 [2024-12-06 13:26:24.862165] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 107011 0 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107011 0 idle 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107011 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107011 -w 256 00:32:18.459 13:26:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107011 root 20 0 64.2g 44928 32640 S 0.0 0.4 0:00.23 reactor_0' 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107011 root 20 0 64.2g 44928 32640 S 0.0 0.4 0:00.23 reactor_0 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 107011 1 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107011 1 idle 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107011 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107011 -w 256 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107016 root 20 0 64.2g 44928 32640 S 0.0 0.4 0:00.00 reactor_1' 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107016 root 20 0 64.2g 44928 32640 S 0.0 0.4 0:00.00 reactor_1 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:18.717 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:18.718 13:26:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:32:18.718 13:26:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=107086 00:32:18.718 13:26:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:18.718 13:26:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:18.718 13:26:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 107011 0 00:32:18.718 13:26:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:18.718 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 107011 0 busy 00:32:18.718 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107011 00:32:18.718 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:18.718 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:18.718 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:18.718 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:18.718 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:18.718 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:18.718 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:18.718 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:18.718 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107011 -w 256 00:32:18.718 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:18.976 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107011 root 20 0 64.2g 44928 32640 S 6.7 0.4 0:00.24 reactor_0' 00:32:18.976 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107011 root 20 0 64.2g 44928 32640 S 6.7 0.4 0:00.24 reactor_0 00:32:18.976 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:18.976 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:18.976 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:32:18.976 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:32:18.976 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:18.976 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:18.976 13:26:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:32:19.909 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:32:19.909 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:19.909 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107011 -w 256 00:32:19.909 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:20.178 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107011 root 20 0 64.2g 46208 33024 R 99.9 0.4 0:01.59 reactor_0' 00:32:20.178 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:20.178 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107011 root 20 0 64.2g 46208 33024 R 99.9 0.4 0:01.59 reactor_0 00:32:20.178 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:20.178 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:20.178 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:20.178 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:20.178 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:20.178 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = 
\i\d\l\e ]] 00:32:20.178 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:20.179 13:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:20.179 13:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:20.179 13:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 107011 1 00:32:20.179 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 107011 1 busy 00:32:20.179 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107011 00:32:20.179 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:20.179 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:20.179 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:20.179 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:20.179 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:20.179 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:20.179 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:20.179 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:20.179 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107011 -w 256 00:32:20.179 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:20.451 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107016 root 20 0 64.2g 46208 33024 R 73.3 0.4 0:00.79 reactor_1' 00:32:20.451 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107016 root 20 0 64.2g 46208 33024 R 73.3 0.4 0:00.79 reactor_1 00:32:20.451 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:20.451 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:20.451 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=73.3 00:32:20.451 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=73 00:32:20.451 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:20.451 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:20.451 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:20.451 13:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:20.451 13:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 107086 00:32:30.429 Initializing NVMe Controllers 00:32:30.429 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:32:30.429 Controller IO queue size 256, less than required. 00:32:30.429 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:30.429 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:30.429 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:30.429 Initialization complete. Launching workers. 
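The table that follows is the measured result of the interrupt-mode run: both reactors were first confirmed busy above the 30% BUSY_THRESHOLD while the perf process (pid 107086) ran, and the 10-second pass has just completed. For reference, the exact load-generator invocation recorded in the trace above can be repeated standalone; the build path, core mask 0xC (cores 2-3), and the 10.0.0.3 listener address are all specific to this run:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'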
00:32:30.429 ========================================================
00:32:30.429 Latency(us)
00:32:30.429 Device Information : IOPS MiB/s Average min max
00:32:30.429 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 6822.60 26.65 37592.82 5903.66 74976.27
00:32:30.429 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 6695.90 26.16 38289.95 6888.08 75622.95
00:32:30.429 ========================================================
00:32:30.429 Total : 13518.50 52.81 37938.12 5903.66 75622.95
00:32:30.429
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 107011 0
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107011 0 idle
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107011
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107011 -w 256
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107011 root 20 0 64.2g 46208 33024 S 0.0 0.4 0:13.50 reactor_0'
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107011 root 20 0 64.2g 46208 33024 S 0.0 0.4 0:13.50 reactor_0
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 107011 1
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107011 1 idle
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107011
00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107011 -w 256 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107016 root 20 0 64.2g 46208 33024 S 0.0 0.4 0:06.63 reactor_1' 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107016 root 20 0 64.2g 46208 33024 S 0.0 0.4 0:06.63 reactor_1 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:30.429 13:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in 
{0..1} 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 107011 0 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107011 0 idle 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107011 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107011 -w 256 00:32:31.806 13:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107011 root 20 0 64.2g 48512 33024 S 0.0 0.4 0:13.56 reactor_0' 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107011 root 20 0 64.2g 48512 33024 S 0.0 0.4 0:13.56 reactor_0 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 107011 1 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107011 1 idle 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107011 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107011 -w 256 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107016 root 20 0 64.2g 48512 33024 S 0.0 0.4 0:06.63 reactor_1' 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107016 root 20 0 64.2g 48512 33024 S 0.0 0.4 0:06.63 reactor_1 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:31.806 13:26:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:32.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:32.065 13:26:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:32.065 13:26:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:32.065 13:26:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:32.065 13:26:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:32.065 13:26:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:32.065 13:26:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:32.065 13:26:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:32.065 13:26:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:32.065 13:26:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:32.065 13:26:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:32.065 13:26:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:32.324 rmmod nvme_tcp 00:32:32.324 rmmod nvme_fabrics 00:32:32.324 rmmod nvme_keyring 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 107011 ']' 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt 
-- nvmf/common.sh@518 -- # killprocess 107011 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 107011 ']' 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 107011 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107011 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:32.324 killing process with pid 107011 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107011' 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 107011 00:32:32.324 13:26:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 107011 00:32:32.583 13:26:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:32.583 13:26:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:32.583 13:26:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:32.583 13:26:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:32.583 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:32.583 13:26:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:32.583 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:32.583 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:32.583 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:32.583 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:32.583 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:32.583 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:32.583 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:32.583 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:32.583 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:32.583 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:32.583 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:32.583 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:32.841 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:32.841 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:32.841 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:32.841 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:32.841 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:32.841 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns
00:32:32.841 13:26:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:32:32.841 13:26:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:32.841 13:26:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0
00:32:32.841
00:32:32.841 real 0m16.324s
00:32:32.841 user 0m28.028s
00:32:32.841 sys 0m7.471s
00:32:32.841 13:26:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:32.841 ************************************
00:32:32.841 END TEST nvmf_interrupt
00:32:32.841 13:26:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:32.841 ************************************
00:32:32.841 ************************************
00:32:32.841 END TEST nvmf_tcp
00:32:32.841 ************************************
00:32:32.841
00:32:32.841 real 20m30.284s
00:32:32.841 user 54m37.419s
00:32:32.841 sys 4m56.737s
00:32:32.841 13:26:39 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:32.841 13:26:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:32.841 13:26:39 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]]
00:32:32.841 13:26:39 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:32:32.841 13:26:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:32:32.841 13:26:39 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:32.841 13:26:39 -- common/autotest_common.sh@10 -- # set +x
00:32:32.841 ************************************
00:32:32.841 START TEST spdkcli_nvmf_tcp
00:32:32.841 ************************************
00:32:32.841 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:32:33.101 * Looking for test storage... 
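With the nvmf_tcp suite closed out above (20m30s of wall time in total, 16.3s of it in nvmf_interrupt), autotest starts the spdkcli stage. Per the run_test line in the trace, this stage can be rerun on its own against the same tree (repo path as in this run):

  /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp

The configuration batches further below are fed through spdkcli_job.py; judging from the 'Executing command' echoes it produces, each input line appears to pair an spdkcli command with a substring expected in its output plus a verify flag, e.g. '/bdevs/malloc create 32 512 Malloc1' 'Malloc1' True. That reading is inferred from this log rather than from spdkcli documentation.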
00:32:33.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:33.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.101 --rc genhtml_branch_coverage=1 00:32:33.101 --rc genhtml_function_coverage=1 00:32:33.101 --rc genhtml_legend=1 00:32:33.101 --rc geninfo_all_blocks=1 00:32:33.101 --rc geninfo_unexecuted_blocks=1 00:32:33.101 00:32:33.101 ' 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:33.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.101 --rc genhtml_branch_coverage=1 
00:32:33.101 --rc genhtml_function_coverage=1 00:32:33.101 --rc genhtml_legend=1 00:32:33.101 --rc geninfo_all_blocks=1 00:32:33.101 --rc geninfo_unexecuted_blocks=1 00:32:33.101 00:32:33.101 ' 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:33.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.101 --rc genhtml_branch_coverage=1 00:32:33.101 --rc genhtml_function_coverage=1 00:32:33.101 --rc genhtml_legend=1 00:32:33.101 --rc geninfo_all_blocks=1 00:32:33.101 --rc geninfo_unexecuted_blocks=1 00:32:33.101 00:32:33.101 ' 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:33.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.101 --rc genhtml_branch_coverage=1 00:32:33.101 --rc genhtml_function_coverage=1 00:32:33.101 --rc genhtml_legend=1 00:32:33.101 --rc geninfo_all_blocks=1 00:32:33.101 --rc geninfo_unexecuted_blocks=1 00:32:33.101 00:32:33.101 ' 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:33.101 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=107417 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 107417 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 107417 ']' 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:33.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:33.101 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:33.101 [2024-12-06 13:26:39.611450] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:32:33.101 [2024-12-06 13:26:39.611560] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107417 ] 00:32:33.360 [2024-12-06 13:26:39.763199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:33.360 [2024-12-06 13:26:39.808981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.360 [2024-12-06 13:26:39.809019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.619 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:33.619 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:33.619 13:26:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:33.619 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:33.619 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:33.619 13:26:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:33.619 13:26:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:33.619 13:26:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:33.619 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:33.619 13:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:33.619 13:26:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:33.619 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:33.619 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:33.619 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:33.619 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:33.619 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:33.619 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:33.619 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:33.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:33.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:33.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:33.619 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:33.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:33.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:33.619 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:33.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:33.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:33.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:33.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:33.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:33.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:33.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:33.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:33.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:33.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:33.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:33.620 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:33.620 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:33.620 ' 00:32:36.921 [2024-12-06 13:26:42.757021] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:37.856 [2024-12-06 13:26:44.082082] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:40.383 [2024-12-06 13:26:46.531775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:42.281 [2024-12-06 13:26:48.649216] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:44.207 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:44.207 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:44.207 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:44.207 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 
00:32:44.207 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:32:44.207 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:32:44.207 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:32:44.207 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:32:44.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:32:44.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:32:44.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:32:44.207 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:32:44.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:32:44.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:32:44.207 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:32:44.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:32:44.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:32:44.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:32:44.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:32:44.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:32:44.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:32:44.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:32:44.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:32:44.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:32:44.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:32:44.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:32:44.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:32:44.208 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:32:44.208 13:26:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:32:44.208 13:26:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:44.208 13:26:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:44.208 13:26:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:32:44.208 13:26:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:44.208 13:26:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:44.208 13:26:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match
00:32:44.208 13:26:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf
00:32:44.467 13:26:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:32:44.467 13:26:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:32:44.467 13:26:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:32:44.467 13:26:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:44.467 13:26:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:44.725 13:26:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:32:44.725 13:26:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:44.725 13:26:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:44.725 13:26:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:32:44.725 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:32:44.725 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:32:44.725 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:32:44.725 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\''
00:32:44.725 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\''
00:32:44.725 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:32:44.725 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:32:44.725 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:32:44.725 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:32:44.725 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:32:44.725 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:32:44.725 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:32:44.725 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:32:44.725 '
00:32:49.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:32:49.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:32:49.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:32:49.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:32:49.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False]
00:32:49.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False]
00:32:49.995 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:32:49.995 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:32:49.995 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:32:49.995 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:32:49.995 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:32:49.995 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:32:49.995 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:32:49.995 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:32:50.254 13:26:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:32:50.254 13:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:50.254 13:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:50.254 13:26:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 107417
00:32:50.254 13:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 107417 ']'
00:32:50.254 13:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 107417
00:32:50.254 13:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname
00:32:50.254 13:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:50.254 13:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107417
00:32:50.254 13:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:50.254 13:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 107417
13:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107417'
00:32:50.254 13:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 107417
00:32:50.254 13:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 107417
00:32:50.514 13:26:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup
00:32:50.514 13:26:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']'
00:32:50.514 13:26:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 107417 ']'
00:32:50.514 13:26:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 107417
00:32:50.514 13:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 107417 ']'
00:32:50.514 13:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 107417
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (107417) - No such process
Process with pid 107417 is not found
13:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 107417 is not found'
00:32:50.514 13:26:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:32:50.514 13:26:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:32:50.514 13:26:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:32:50.514
00:32:50.514 real 0m17.517s
00:32:50.514 user 0m38.376s
00:32:50.514 sys 0m0.796s
00:32:50.514 13:26:56 spdkcli_nvmf_tcp --
common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.514 ************************************ 00:32:50.514 END TEST spdkcli_nvmf_tcp 00:32:50.514 ************************************ 00:32:50.514 13:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:50.514 13:26:56 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:50.514 13:26:56 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:50.514 13:26:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.514 13:26:56 -- common/autotest_common.sh@10 -- # set +x 00:32:50.514 ************************************ 00:32:50.514 START TEST nvmf_identify_passthru 00:32:50.514 ************************************ 00:32:50.514 13:26:56 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:50.514 * Looking for test storage... 00:32:50.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:50.514 13:26:56 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:50.514 13:26:56 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:32:50.514 13:26:56 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:50.775 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:50.775 13:26:57 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:50.775 13:26:57 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:50.775 13:26:57 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:50.775 13:26:57 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:50.775 13:26:57 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:50.775 13:26:57 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:50.775 13:26:57 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:50.776 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:50.776 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:50.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.776 --rc genhtml_branch_coverage=1 00:32:50.776 --rc genhtml_function_coverage=1 00:32:50.776 --rc genhtml_legend=1 00:32:50.776 --rc geninfo_all_blocks=1 00:32:50.776 --rc geninfo_unexecuted_blocks=1 00:32:50.776 00:32:50.776 ' 00:32:50.776 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:50.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.776 --rc genhtml_branch_coverage=1 00:32:50.776 --rc genhtml_function_coverage=1 00:32:50.776 --rc genhtml_legend=1 00:32:50.776 --rc geninfo_all_blocks=1 00:32:50.776 --rc geninfo_unexecuted_blocks=1 00:32:50.776 00:32:50.776 ' 00:32:50.776 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:50.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.776 --rc genhtml_branch_coverage=1 00:32:50.776 --rc genhtml_function_coverage=1 00:32:50.776 --rc genhtml_legend=1 00:32:50.776 --rc geninfo_all_blocks=1 00:32:50.776 --rc geninfo_unexecuted_blocks=1 00:32:50.776 00:32:50.776 ' 00:32:50.776 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:50.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.776 --rc genhtml_branch_coverage=1 00:32:50.776 --rc genhtml_function_coverage=1 00:32:50.776 --rc genhtml_legend=1 00:32:50.776 --rc geninfo_all_blocks=1 00:32:50.776 --rc geninfo_unexecuted_blocks=1 00:32:50.776 00:32:50.776 ' 00:32:50.776 13:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:50.776 
13:26:57 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.776 13:26:57 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.776 13:26:57 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.776 13:26:57 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.776 13:26:57 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:50.776 13:26:57 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:50.776 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:50.776 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:50.776 13:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.776 13:26:57 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.776 13:26:57 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.776 13:26:57 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.776 13:26:57 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.776 13:26:57 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:50.777 13:26:57 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.777 13:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.777 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:50.777 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:50.777 Cannot find device "nvmf_init_br" 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:50.777 Cannot find device "nvmf_init_br2" 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:50.777 Cannot find device "nvmf_tgt_br" 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:50.777 Cannot find device "nvmf_tgt_br2" 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:50.777 Cannot find device "nvmf_init_br" 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:50.777 Cannot find device "nvmf_init_br2" 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:50.777 Cannot find device "nvmf_tgt_br" 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:50.777 Cannot find device "nvmf_tgt_br2" 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:50.777 Cannot find device "nvmf_br" 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:50.777 Cannot find device "nvmf_init_if" 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:50.777 Cannot find device "nvmf_init_if2" 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:50.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 00:32:50.777 13:26:57 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:50.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:50.777 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:51.036 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:32:51.037 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:32:51.037 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms
00:32:51.037
00:32:51.037 --- 10.0.0.3 ping statistics ---
00:32:51.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:51.037 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:32:51.037 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:32:51.037 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms
00:32:51.037
00:32:51.037 --- 10.0.0.4 ping statistics ---
00:32:51.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:51.037 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms
00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:32:51.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:51.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms
00:32:51.037
00:32:51.037 --- 10.0.0.1 ping statistics ---
00:32:51.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:51.037 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms
00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:32:51.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:51.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms
00:32:51.037
00:32:51.037 --- 10.0.0.2 ping statistics ---
00:32:51.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:51.037 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms
00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0
00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:51.037 13:26:57 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:51.037 13:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:32:51.037 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:51.037 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:32:51.037 13:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:32:51.037 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=()
00:32:51.037 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs
00:32:51.037 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:32:51.037 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:32:51.037 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=()
00:32:51.037 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs
00:32:51.037 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:32:51.037 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:32:51.037 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:32:51.296 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:32:51.296 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:32:51.296 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:32:51.296 13:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0
00:32:51.296 13:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']'
00:32:51.296 13:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
00:32:51.297 13:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:32:51.297 13:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:32:51.297 13:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340
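The serial number just captured (12340) comes from a pipeline this test reuses throughout: gen_nvme.sh emits bdev_nvme_attach_controller JSON for every local NVMe device, jq pulls out each PCIe address (traddr), and spdk_nvme_identify output is filtered down to a single field with grep and awk. A minimal sketch of that pattern, assuming the repo layout of this run (the head -n1 is a simplification of get_first_nvme_bdf's array handling):

rootdir=/home/vagrant/spdk_repo/spdk

# First NVMe PCIe address: gen_nvme.sh prints JSON-RPC config, jq extracts traddr.
bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)

# Identify the controller over PCIe and keep only the value column,
# mirroring the grep | awk pipeline traced above.
serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
  | grep 'Serial Number:' | awk '{print $3}')
echo "$bdf $serial"

Further down, the same pipeline runs against the TCP transport and the two serial numbers are compared; the no-op check '[' 12340 '!=' 12340 ']' is what confirms the passthru path forwards identify data unchanged.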
00:32:51.297 13:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:32:51.297 13:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:51.297 13:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:51.556 13:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:32:51.556 13:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:51.556 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:51.556 13:26:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.557 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:51.557 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:51.557 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.557 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=107922 00:32:51.557 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:51.557 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:51.557 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 107922 00:32:51.557 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 107922 ']' 00:32:51.557 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.557 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:51.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.557 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.557 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:51.557 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.557 [2024-12-06 13:26:58.067391] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:32:51.557 [2024-12-06 13:26:58.067478] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:51.836 [2024-12-06 13:26:58.215717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:51.836 [2024-12-06 13:26:58.252765] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:51.836 [2024-12-06 13:26:58.252825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:51.836 [2024-12-06 13:26:58.252837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:51.836 [2024-12-06 13:26:58.252845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
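Here nvmf_tgt is launched inside the target namespace with --wait-for-rpc, and waitforlisten blocks until the application answers on /var/tmp/spdk.sock. A condensed sketch of that start-and-wait step, with a hypothetical polling loop and a simplified trap standing in for the waitforlisten and nvmftestfini helpers:

rootdir=/home/vagrant/spdk_repo/spdk

ip netns exec nvmf_tgt_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
trap 'kill $nvmfpid' SIGINT SIGTERM EXIT   # simplified; the test also collects shm and runs nvmftestfini

# Hypothetical stand-in for waitforlisten: poll the RPC socket until it answers.
until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
  sleep 0.5
done

# --wait-for-rpc defers subsystem init so pre-init configuration can land first;
# both RPCs appear in the trace that follows.
"$rootdir/scripts/rpc.py" nvmf_set_config --passthru-identify-ctrlr
"$rootdir/scripts/rpc.py" framework_start_init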
00:32:51.836 [2024-12-06 13:26:58.252852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:51.836 [2024-12-06 13:26:58.253763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.836 [2024-12-06 13:26:58.253828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:51.836 [2024-12-06 13:26:58.253912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:51.836 [2024-12-06 13:26:58.253918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.836 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:51.836 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:51.836 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:51.836 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.836 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:52.096 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.096 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:52.096 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.096 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:52.096 [2024-12-06 13:26:58.409676] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:52.096 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.096 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:52.096 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.096 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:52.096 [2024-12-06 13:26:58.419272] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:52.096 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.096 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:52.096 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:52.096 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:52.096 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:32:52.096 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.097 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:52.097 Nvme0n1 00:32:52.097 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.097 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:52.097 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.097 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:52.097 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.097 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:32:52.097 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:52.097 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:32:52.097 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:52.097 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:32:52.097 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:52.097 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:32:52.097 [2024-12-06 13:26:58.563146] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:32:52.097 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:52.097 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems
00:32:52.097 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:52.097 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:32:52.097 [
00:32:52.097 {
00:32:52.097 "allow_any_host": true,
00:32:52.097 "hosts": [],
00:32:52.097 "listen_addresses": [],
00:32:52.097 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:32:52.097 "subtype": "Discovery"
00:32:52.097 },
00:32:52.097 {
00:32:52.097 "allow_any_host": true,
00:32:52.097 "hosts": [],
00:32:52.097 "listen_addresses": [
00:32:52.097 {
00:32:52.097 "adrfam": "IPv4",
00:32:52.097 "traddr": "10.0.0.3",
00:32:52.097 "trsvcid": "4420",
00:32:52.097 "trtype": "TCP"
00:32:52.097 }
00:32:52.097 ],
00:32:52.097 "max_cntlid": 65519,
00:32:52.097 "max_namespaces": 1,
00:32:52.097 "min_cntlid": 1,
00:32:52.097 "model_number": "SPDK bdev Controller",
00:32:52.097 "namespaces": [
00:32:52.097 {
00:32:52.097 "bdev_name": "Nvme0n1",
00:32:52.097 "name": "Nvme0n1",
00:32:52.097 "nguid": "44CAEFA1C86A4E2F9F966D5F4D2555AB",
00:32:52.097 "nsid": 1,
00:32:52.097 "uuid": "44caefa1-c86a-4e2f-9f96-6d5f4d2555ab"
00:32:52.097 }
00:32:52.097 ],
00:32:52.097 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:32:52.097 "serial_number": "SPDK00000000000001",
00:32:52.097 "subtype": "NVMe"
00:32:52.097 }
00:32:52.097 ]
00:32:52.097 13:26:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:52.097 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:32:52.097 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}'
00:32:52.097 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:'
00:32:52.355 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340
00:32:52.355 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:32:52.355 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}'
00:32:52.355 13:26:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:'
00:32:52.614 13:26:59
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:32:52.614 13:26:59 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:32:52.614 13:26:59 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:52.614 13:26:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.614 13:26:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:52.614 13:26:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.614 13:26:59 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:52.614 13:26:59 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:52.614 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:52.614 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:52.872 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:52.872 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:52.872 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:52.872 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:52.872 rmmod nvme_tcp 00:32:52.872 rmmod nvme_fabrics 00:32:52.872 rmmod nvme_keyring 00:32:52.872 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:52.872 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:52.872 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:52.872 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 107922 ']' 00:32:52.872 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 107922 00:32:52.872 13:26:59 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 107922 ']' 00:32:52.872 13:26:59 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 107922 00:32:52.872 13:26:59 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:52.872 13:26:59 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:52.872 13:26:59 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107922 00:32:52.872 13:26:59 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:52.872 13:26:59 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:52.872 killing process with pid 107922 00:32:52.872 13:26:59 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107922' 00:32:52.872 13:26:59 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 107922 00:32:52.872 13:26:59 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 107922 00:32:52.872 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:52.872 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:52.872 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:52.872 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:52.872 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:32:52.872 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:52.872 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@791 -- # 
iptables-restore 00:32:52.872 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:53.130 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:53.130 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:53.131 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:53.131 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:53.131 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:53.131 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:53.131 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:53.131 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:53.131 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:53.131 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:53.131 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:53.131 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:53.131 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:53.131 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:53.131 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:53.131 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.131 13:26:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:53.131 13:26:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.131 13:26:59 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 00:32:53.131 00:32:53.131 real 0m2.721s 00:32:53.131 user 0m5.065s 00:32:53.131 sys 0m0.833s 00:32:53.131 13:26:59 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:53.131 13:26:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:53.131 ************************************ 00:32:53.131 END TEST nvmf_identify_passthru 00:32:53.131 ************************************ 00:32:53.390 13:26:59 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:32:53.390 13:26:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:53.390 13:26:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:53.390 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:32:53.390 ************************************ 00:32:53.390 START TEST nvmf_dif 00:32:53.390 ************************************ 00:32:53.390 13:26:59 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:32:53.390 * Looking for test storage... 
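The teardown that closed identify_passthru just above is the inverse of the earlier veth setup: iptr rewrites the firewall by filtering out every rule tagged with the SPDK_NVMF comment, then nvmf_veth_fini unwinds the bridge, veth pairs, and namespace. A condensed sketch of that sequence (the || true guards are an addition for idempotence, and ip netns delete stands in for the _remove_spdk_ns helper):

# Drop only the SPDK-tagged iptables rules, as iptr does above.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Unwind the virtual topology built by nvmf_veth_init.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" nomaster || true
  ip link set "$dev" down || true
done
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if || true
ip link delete nvmf_init_if2 || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
ip netns delete nvmf_tgt_ns_spdk || true

The nvmf_dif test starting here rebuilds the same topology from scratch, which is why the identical nvmf_veth_init trace repeats below.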
00:32:53.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:53.390 13:26:59 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:53.390 13:26:59 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:32:53.390 13:26:59 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:53.390 13:26:59 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:32:53.390 13:26:59 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:53.390 13:26:59 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:53.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.390 --rc genhtml_branch_coverage=1 00:32:53.390 --rc genhtml_function_coverage=1 00:32:53.390 --rc genhtml_legend=1 00:32:53.390 --rc geninfo_all_blocks=1 00:32:53.390 --rc geninfo_unexecuted_blocks=1 00:32:53.390 00:32:53.390 ' 00:32:53.390 13:26:59 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:53.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.390 --rc genhtml_branch_coverage=1 00:32:53.390 --rc genhtml_function_coverage=1 00:32:53.390 --rc genhtml_legend=1 00:32:53.390 --rc geninfo_all_blocks=1 00:32:53.390 --rc geninfo_unexecuted_blocks=1 00:32:53.390 00:32:53.390 ' 00:32:53.390 13:26:59 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:32:53.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.390 --rc genhtml_branch_coverage=1 00:32:53.390 --rc genhtml_function_coverage=1 00:32:53.390 --rc genhtml_legend=1 00:32:53.390 --rc geninfo_all_blocks=1 00:32:53.390 --rc geninfo_unexecuted_blocks=1 00:32:53.390 00:32:53.390 ' 00:32:53.390 13:26:59 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:53.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.390 --rc genhtml_branch_coverage=1 00:32:53.390 --rc genhtml_function_coverage=1 00:32:53.390 --rc genhtml_legend=1 00:32:53.390 --rc geninfo_all_blocks=1 00:32:53.390 --rc geninfo_unexecuted_blocks=1 00:32:53.390 00:32:53.390 ' 00:32:53.390 13:26:59 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:53.390 13:26:59 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:32:53.390 13:26:59 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:53.390 13:26:59 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:53.390 13:26:59 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:53.390 13:26:59 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:53.390 13:26:59 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:53.390 13:26:59 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:53.390 13:26:59 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:53.390 13:26:59 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:53.390 13:26:59 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:53.390 13:26:59 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:53.390 13:26:59 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:32:53.390 13:26:59 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:32:53.390 13:26:59 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:53.390 13:26:59 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:53.390 13:26:59 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:53.390 13:26:59 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:53.390 13:26:59 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:53.390 13:26:59 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:53.390 13:26:59 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.390 13:26:59 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.390 13:26:59 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.391 13:26:59 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:32:53.391 13:26:59 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:53.391 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:53.391 13:26:59 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:32:53.391 13:26:59 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:32:53.391 13:26:59 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:53.391 13:26:59 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:32:53.391 13:26:59 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.391 13:26:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:53.391 13:26:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:32:53.391 13:26:59 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:53.391 Cannot find device "nvmf_init_br" 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@162 -- # true 00:32:53.391 13:26:59 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:53.649 Cannot find device "nvmf_init_br2" 00:32:53.649 13:26:59 nvmf_dif -- nvmf/common.sh@163 -- # true 00:32:53.649 13:26:59 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:53.649 Cannot find device "nvmf_tgt_br" 00:32:53.649 13:26:59 nvmf_dif -- nvmf/common.sh@164 -- # true 00:32:53.649 13:26:59 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:53.649 Cannot find device "nvmf_tgt_br2" 00:32:53.649 13:26:59 nvmf_dif -- nvmf/common.sh@165 -- # true 00:32:53.649 13:26:59 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:53.649 Cannot find device "nvmf_init_br" 00:32:53.649 13:26:59 nvmf_dif -- nvmf/common.sh@166 -- # true 00:32:53.649 13:26:59 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:53.649 Cannot find device "nvmf_init_br2" 00:32:53.650 13:26:59 nvmf_dif -- nvmf/common.sh@167 -- # true 00:32:53.650 13:26:59 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:53.650 Cannot find device "nvmf_tgt_br" 00:32:53.650 13:26:59 nvmf_dif -- nvmf/common.sh@168 -- # true 00:32:53.650 13:26:59 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:53.650 Cannot find device "nvmf_tgt_br2" 00:32:53.650 13:26:59 nvmf_dif -- nvmf/common.sh@169 -- # true 00:32:53.650 13:26:59 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:53.650 Cannot find device "nvmf_br" 00:32:53.650 13:26:59 nvmf_dif -- nvmf/common.sh@170 -- # true 00:32:53.650 13:26:59 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:32:53.650 Cannot find device "nvmf_init_if" 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@171 -- # true 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:53.650 Cannot find device "nvmf_init_if2" 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@172 -- # true 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:53.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@173 -- # true 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:53.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@174 -- # true 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:53.650 13:27:00 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:53.909 13:27:00 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:53.909 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:53.909 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:32:53.909 00:32:53.909 --- 10.0.0.3 ping statistics --- 00:32:53.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.909 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:53.909 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:53.909 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:32:53.909 00:32:53.909 --- 10.0.0.4 ping statistics --- 00:32:53.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.909 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:53.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:53.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:32:53.909 00:32:53.909 --- 10.0.0.1 ping statistics --- 00:32:53.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.909 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:53.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:53.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:32:53.909 00:32:53.909 --- 10.0.0.2 ping statistics --- 00:32:53.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.909 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:53.909 13:27:00 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:54.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:54.168 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:54.168 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:54.168 13:27:00 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:54.168 13:27:00 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:54.168 13:27:00 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:54.168 13:27:00 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:54.168 13:27:00 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:54.168 13:27:00 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:54.427 13:27:00 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:54.427 13:27:00 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:32:54.427 13:27:00 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:54.427 13:27:00 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:54.427 13:27:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:54.427 13:27:00 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=108311 00:32:54.427 13:27:00 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:54.427 13:27:00 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 108311 00:32:54.427 13:27:00 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 108311 ']' 00:32:54.427 13:27:00 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:54.427 13:27:00 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:54.427 13:27:00 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:54.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:54.427 13:27:00 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:54.427 13:27:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:54.427 [2024-12-06 13:27:00.775169] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:32:54.427 [2024-12-06 13:27:00.775831] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:54.427 [2024-12-06 13:27:00.929184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.687 [2024-12-06 13:27:00.968101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
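
The nvmf_veth_init block traced above builds the entire virtual test network for this run. A condensed, stand-alone sketch of the same topology, assuming root and using only the first initiator/target veth pair (names, addresses, and the 4420 port are taken from the trace; the second pair and the cleanup-on-entry steps are omitted):

# target namespace plus two veth pairs, one end of each left on the host
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# initiator keeps 10.0.0.1; the target side gets 10.0.0.3 inside the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the host-side peers so initiator and target can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# the ipts wrapper seen in the trace expands to iptables rules like these
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3   # initiator-to-target sanity check, as in the trace
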
00:32:54.687 [2024-12-06 13:27:00.968158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:54.687 [2024-12-06 13:27:00.968170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:54.687 [2024-12-06 13:27:00.968180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:54.687 [2024-12-06 13:27:00.968188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:54.687 [2024-12-06 13:27:00.968566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.687 13:27:01 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:54.687 13:27:01 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:32:54.687 13:27:01 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:54.687 13:27:01 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:54.687 13:27:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:54.687 13:27:01 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:54.687 13:27:01 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:32:54.687 13:27:01 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:54.687 13:27:01 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.687 13:27:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:54.687 [2024-12-06 13:27:01.107469] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:54.687 13:27:01 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.687 13:27:01 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:54.687 13:27:01 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:54.687 13:27:01 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:54.687 13:27:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:54.687 ************************************ 00:32:54.687 START TEST fio_dif_1_default 00:32:54.687 ************************************ 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:54.687 bdev_null0 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.687 13:27:01 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:54.687 [2024-12-06 13:27:01.155636] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:54.687 { 00:32:54.687 "params": { 00:32:54.687 "name": "Nvme$subsystem", 00:32:54.687 "trtype": "$TEST_TRANSPORT", 00:32:54.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:54.687 "adrfam": "ipv4", 00:32:54.687 "trsvcid": "$NVMF_PORT", 00:32:54.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:54.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:54.687 "hdgst": ${hdgst:-false}, 00:32:54.687 "ddgst": ${ddgst:-false} 00:32:54.687 }, 00:32:54.687 "method": "bdev_nvme_attach_controller" 00:32:54.687 } 00:32:54.687 EOF 00:32:54.687 )") 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:54.687 13:27:01 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:54.687 "params": { 00:32:54.687 "name": "Nvme0", 00:32:54.687 "trtype": "tcp", 00:32:54.687 "traddr": "10.0.0.3", 00:32:54.687 "adrfam": "ipv4", 00:32:54.687 "trsvcid": "4420", 00:32:54.687 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:54.687 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:54.687 "hdgst": false, 00:32:54.687 "ddgst": false 00:32:54.687 }, 00:32:54.687 "method": "bdev_nvme_attach_controller" 00:32:54.687 }' 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:54.687 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:54.946 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:54.946 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:54.946 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:54.946 13:27:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:54.946 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:54.946 fio-3.35 00:32:54.946 Starting 1 thread 00:33:07.151 00:33:07.151 filename0: (groupid=0, jobs=1): err= 0: pid=108378: Fri Dec 6 13:27:11 2024 00:33:07.151 read: IOPS=223, BW=896KiB/s (917kB/s)(8976KiB/10021msec) 00:33:07.151 slat (nsec): min=5555, max=57480, avg=10436.37, stdev=5991.79 00:33:07.151 clat (usec): min=457, max=41524, avg=17828.31, stdev=20032.80 00:33:07.151 lat (usec): min=465, max=41534, avg=17838.75, stdev=20033.09 00:33:07.151 clat percentiles (usec): 00:33:07.151 | 1.00th=[ 469], 5.00th=[ 486], 10.00th=[ 498], 20.00th=[ 523], 00:33:07.151 | 30.00th=[ 562], 40.00th=[ 619], 50.00th=[ 644], 
60.00th=[40633], 00:33:07.151 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:07.151 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:33:07.151 | 99.99th=[41681] 00:33:07.151 bw ( KiB/s): min= 416, max= 4480, per=99.92%, avg=895.90, stdev=856.89, samples=20 00:33:07.151 iops : min= 104, max= 1120, avg=223.95, stdev=214.22, samples=20 00:33:07.151 lat (usec) : 500=10.52%, 750=46.66%, 1000=0.04% 00:33:07.151 lat (msec) : 4=0.18%, 50=42.60% 00:33:07.151 cpu : usr=92.49%, sys=7.00%, ctx=23, majf=0, minf=9 00:33:07.151 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:07.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.151 issued rwts: total=2244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:07.151 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:07.151 00:33:07.151 Run status group 0 (all jobs): 00:33:07.151 READ: bw=896KiB/s (917kB/s), 896KiB/s-896KiB/s (917kB/s-917kB/s), io=8976KiB (9191kB), run=10021-10021msec 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.151 00:33:07.151 real 0m11.000s 00:33:07.151 user 0m9.915s 00:33:07.151 sys 0m0.946s 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:07.151 ************************************ 00:33:07.151 END TEST fio_dif_1_default 00:33:07.151 ************************************ 00:33:07.151 13:27:12 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:07.151 13:27:12 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:07.151 13:27:12 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:07.151 13:27:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:07.151 ************************************ 00:33:07.151 START TEST fio_dif_1_multi_subsystems 00:33:07.151 ************************************ 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@92 -- # local files=1 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:07.151 bdev_null0 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:07.151 [2024-12-06 13:27:12.202720] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:07.151 bdev_null1 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.151 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:07.152 { 00:33:07.152 "params": { 00:33:07.152 "name": "Nvme$subsystem", 00:33:07.152 "trtype": "$TEST_TRANSPORT", 00:33:07.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:07.152 "adrfam": "ipv4", 00:33:07.152 "trsvcid": "$NVMF_PORT", 00:33:07.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:07.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:07.152 "hdgst": ${hdgst:-false}, 00:33:07.152 "ddgst": ${ddgst:-false} 00:33:07.152 }, 00:33:07.152 "method": "bdev_nvme_attach_controller" 00:33:07.152 } 00:33:07.152 EOF 00:33:07.152 )") 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:07.152 { 00:33:07.152 "params": { 00:33:07.152 "name": "Nvme$subsystem", 00:33:07.152 "trtype": "$TEST_TRANSPORT", 00:33:07.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:07.152 "adrfam": "ipv4", 00:33:07.152 "trsvcid": "$NVMF_PORT", 00:33:07.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:07.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:07.152 "hdgst": ${hdgst:-false}, 00:33:07.152 "ddgst": ${ddgst:-false} 00:33:07.152 }, 00:33:07.152 "method": "bdev_nvme_attach_controller" 00:33:07.152 } 00:33:07.152 EOF 00:33:07.152 )") 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
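
The gen_nvmf_target_json expansion interleaved with the xtrace above is hard to follow, so here is a simplified sketch of the underlying pattern: each subsystem contributes one bdev_nvme_attach_controller fragment via a heredoc, the fragments are comma-joined through IFS, and jq pretty-prints the result. The bracketing into a bare JSON array below is illustrative only; the real helper in test/nvmf/common.sh wraps the fragments in the full bdev-subsystem document that fio's spdk_bdev engine consumes.

config=()
for subsystem in 0 1; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
# comma-join the fragments (first char of IFS) and pretty-print, as the
# IFS=, and jq . steps in the trace do
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .
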
00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:07.152 "params": { 00:33:07.152 "name": "Nvme0", 00:33:07.152 "trtype": "tcp", 00:33:07.152 "traddr": "10.0.0.3", 00:33:07.152 "adrfam": "ipv4", 00:33:07.152 "trsvcid": "4420", 00:33:07.152 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:07.152 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:07.152 "hdgst": false, 00:33:07.152 "ddgst": false 00:33:07.152 }, 00:33:07.152 "method": "bdev_nvme_attach_controller" 00:33:07.152 },{ 00:33:07.152 "params": { 00:33:07.152 "name": "Nvme1", 00:33:07.152 "trtype": "tcp", 00:33:07.152 "traddr": "10.0.0.3", 00:33:07.152 "adrfam": "ipv4", 00:33:07.152 "trsvcid": "4420", 00:33:07.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:07.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:07.152 "hdgst": false, 00:33:07.152 "ddgst": false 00:33:07.152 }, 00:33:07.152 "method": "bdev_nvme_attach_controller" 00:33:07.152 }' 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:07.152 13:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:07.152 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:07.152 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:07.152 fio-3.35 00:33:07.152 Starting 2 threads 00:33:17.203 00:33:17.204 filename0: (groupid=0, jobs=1): err= 0: pid=108538: Fri Dec 6 13:27:23 2024 00:33:17.204 read: IOPS=183, BW=734KiB/s (751kB/s)(7360KiB/10032msec) 00:33:17.204 slat (nsec): min=7704, max=72962, avg=13146.42, stdev=9429.24 00:33:17.204 clat (usec): min=436, max=42835, avg=21764.20, stdev=20220.12 00:33:17.204 lat (usec): min=444, max=42868, avg=21777.34, stdev=20220.10 00:33:17.204 clat percentiles (usec): 00:33:17.204 | 1.00th=[ 474], 5.00th=[ 498], 10.00th=[ 523], 20.00th=[ 578], 00:33:17.204 | 30.00th=[ 635], 40.00th=[ 709], 50.00th=[40633], 60.00th=[41157], 00:33:17.204 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:33:17.204 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:33:17.204 | 99.99th=[42730] 00:33:17.204 bw ( KiB/s): min= 416, max= 1984, per=52.90%, avg=734.40, stdev=365.66, samples=20 00:33:17.204 iops : 
min= 104, max= 496, avg=183.60, stdev=91.42, samples=20 00:33:17.204 lat (usec) : 500=5.05%, 750=37.45%, 1000=3.04% 00:33:17.204 lat (msec) : 2=2.07%, 10=0.22%, 50=52.17% 00:33:17.204 cpu : usr=95.60%, sys=3.81%, ctx=19, majf=0, minf=0 00:33:17.204 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:17.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.204 issued rwts: total=1840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:17.204 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:17.204 filename1: (groupid=0, jobs=1): err= 0: pid=108539: Fri Dec 6 13:27:23 2024 00:33:17.204 read: IOPS=163, BW=654KiB/s (670kB/s)(6560KiB/10032msec) 00:33:17.204 slat (usec): min=7, max=101, avg=12.98, stdev= 9.89 00:33:17.204 clat (usec): min=447, max=42824, avg=24424.40, stdev=19899.85 00:33:17.204 lat (usec): min=455, max=42857, avg=24437.37, stdev=19900.29 00:33:17.204 clat percentiles (usec): 00:33:17.204 | 1.00th=[ 482], 5.00th=[ 523], 10.00th=[ 553], 20.00th=[ 594], 00:33:17.204 | 30.00th=[ 676], 40.00th=[ 1057], 50.00th=[41157], 60.00th=[41157], 00:33:17.204 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:33:17.204 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:33:17.204 | 99.99th=[42730] 00:33:17.204 bw ( KiB/s): min= 384, max= 2336, per=47.13%, avg=654.40, stdev=408.02, samples=20 00:33:17.204 iops : min= 96, max= 584, avg=163.60, stdev=102.00, samples=20 00:33:17.204 lat (usec) : 500=2.44%, 750=33.29%, 1000=3.17% 00:33:17.204 lat (msec) : 2=2.07%, 10=0.24%, 50=58.78% 00:33:17.204 cpu : usr=95.29%, sys=4.17%, ctx=23, majf=0, minf=9 00:33:17.204 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:17.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.204 issued rwts: total=1640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:17.204 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:17.204 00:33:17.204 Run status group 0 (all jobs): 00:33:17.204 READ: bw=1388KiB/s (1421kB/s), 654KiB/s-734KiB/s (670kB/s-751kB/s), io=13.6MiB (14.3MB), run=10032-10032msec 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.204 13:27:23 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.204 00:33:17.204 real 0m11.167s 00:33:17.204 user 0m19.929s 00:33:17.204 sys 0m1.060s 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:17.204 13:27:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.204 ************************************ 00:33:17.204 END TEST fio_dif_1_multi_subsystems 00:33:17.204 ************************************ 00:33:17.204 13:27:23 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:17.204 13:27:23 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:17.204 13:27:23 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:17.204 13:27:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:17.204 ************************************ 00:33:17.204 START TEST fio_dif_rand_params 00:33:17.204 ************************************ 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:17.204 bdev_null0 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:17.204 [2024-12-06 13:27:23.430227] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:17.204 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:17.205 { 00:33:17.205 "params": { 00:33:17.205 "name": "Nvme$subsystem", 00:33:17.205 "trtype": "$TEST_TRANSPORT", 00:33:17.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:17.205 "adrfam": "ipv4", 00:33:17.205 "trsvcid": "$NVMF_PORT", 00:33:17.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:17.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:17.205 "hdgst": ${hdgst:-false}, 00:33:17.205 "ddgst": ${ddgst:-false} 00:33:17.205 }, 00:33:17.205 "method": "bdev_nvme_attach_controller" 00:33:17.205 } 00:33:17.205 EOF 00:33:17.205 )") 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
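
The ldd/grep/awk triple above is autotest_common.sh probing whether the fio plugin was built against a sanitizer runtime; any hit has to be preloaded ahead of the plugin itself or fio would abort at startup. A minimal sketch of that probe (plugin path from the trace; in this run both probes come back empty, so only the plugin ends up in LD_PRELOAD):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
ld_preload=
for sanitizer in libasan libclang_rt.asan; do
    # third ldd column is the resolved library path; empty when the
    # plugin was not linked against that sanitizer
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && ld_preload="$asan_lib $ld_preload"
done
# sanitizer runtime (if any) first, then the plugin, as in the trace
export LD_PRELOAD="$ld_preload $plugin"
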
00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:17.205 "params": { 00:33:17.205 "name": "Nvme0", 00:33:17.205 "trtype": "tcp", 00:33:17.205 "traddr": "10.0.0.3", 00:33:17.205 "adrfam": "ipv4", 00:33:17.205 "trsvcid": "4420", 00:33:17.205 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:17.205 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:17.205 "hdgst": false, 00:33:17.205 "ddgst": false 00:33:17.205 }, 00:33:17.205 "method": "bdev_nvme_attach_controller" 00:33:17.205 }' 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:17.205 13:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:17.205 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:17.205 ... 
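
fio is then launched with both the generated bdev JSON and the job file passed as /dev/fd paths, as the command line above shows. A stand-alone approximation using explicit fd redirections, to be run with the LD_PRELOAD from the previous sketch (bdev_config.json stands in for the generated JSON; filename=Nvme0n1 is an assumption based on the Nvme0 controller attach, and the remaining job parameters are the ones echoed in the filename0 line):

/usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf /dev/fd/62 /dev/fd/61 \
    62<bdev_config.json \
    61< <(printf '%s\n' '[filename0]' 'filename=Nvme0n1' \
        'rw=randread' 'bs=128k' 'iodepth=3' 'numjobs=3')
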
00:33:17.205 fio-3.35 00:33:17.205 Starting 3 threads 00:33:23.793 00:33:23.793 filename0: (groupid=0, jobs=1): err= 0: pid=108685: Fri Dec 6 13:27:29 2024 00:33:23.793 read: IOPS=221, BW=27.7MiB/s (29.0MB/s)(139MiB/5006msec) 00:33:23.793 slat (usec): min=7, max=118, avg=14.05, stdev= 5.96 00:33:23.793 clat (usec): min=6023, max=54664, avg=13519.29, stdev=10127.24 00:33:23.793 lat (usec): min=6035, max=54679, avg=13533.34, stdev=10127.14 00:33:23.793 clat percentiles (usec): 00:33:23.793 | 1.00th=[ 6849], 5.00th=[ 8029], 10.00th=[ 8455], 20.00th=[ 9896], 00:33:23.793 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[11731], 00:33:23.793 | 70.00th=[11994], 80.00th=[12256], 90.00th=[13042], 95.00th=[50070], 00:33:23.793 | 99.00th=[53740], 99.50th=[53740], 99.90th=[54789], 99.95th=[54789], 00:33:23.793 | 99.99th=[54789] 00:33:23.793 bw ( KiB/s): min=23552, max=35584, per=32.97%, avg=28313.60, stdev=4329.69, samples=10 00:33:23.793 iops : min= 184, max= 278, avg=221.20, stdev=33.83, samples=10 00:33:23.793 lat (msec) : 10=21.82%, 20=71.69%, 50=1.71%, 100=4.78% 00:33:23.793 cpu : usr=92.21%, sys=6.05%, ctx=17, majf=0, minf=0 00:33:23.793 IO depths : 1=2.8%, 2=97.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:23.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.793 issued rwts: total=1109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:23.793 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:23.793 filename0: (groupid=0, jobs=1): err= 0: pid=108686: Fri Dec 6 13:27:29 2024 00:33:23.793 read: IOPS=232, BW=29.0MiB/s (30.4MB/s)(145MiB/5002msec) 00:33:23.793 slat (nsec): min=4874, max=59438, avg=12872.71, stdev=6113.32 00:33:23.793 clat (usec): min=4459, max=51507, avg=12890.23, stdev=4775.56 00:33:23.793 lat (usec): min=4467, max=51526, avg=12903.11, stdev=4775.70 00:33:23.793 clat percentiles (usec): 00:33:23.793 | 1.00th=[ 4490], 5.00th=[ 4621], 10.00th=[ 4817], 20.00th=[ 9503], 00:33:23.793 | 30.00th=[10028], 40.00th=[11731], 50.00th=[14615], 60.00th=[15270], 00:33:23.793 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16581], 95.00th=[17171], 00:33:23.793 | 99.00th=[18220], 99.50th=[49546], 99.90th=[51643], 99.95th=[51643], 00:33:23.793 | 99.99th=[51643] 00:33:23.793 bw ( KiB/s): min=24576, max=38400, per=34.27%, avg=29434.11, stdev=4800.79, samples=9 00:33:23.793 iops : min= 192, max= 300, avg=229.89, stdev=37.56, samples=9 00:33:23.793 lat (msec) : 10=28.51%, 20=70.97%, 50=0.17%, 100=0.34% 00:33:23.793 cpu : usr=91.74%, sys=6.58%, ctx=7, majf=0, minf=0 00:33:23.793 IO depths : 1=31.2%, 2=68.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:23.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.793 issued rwts: total=1161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:23.793 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:23.793 filename0: (groupid=0, jobs=1): err= 0: pid=108687: Fri Dec 6 13:27:29 2024 00:33:23.793 read: IOPS=217, BW=27.2MiB/s (28.5MB/s)(136MiB/5005msec) 00:33:23.793 slat (nsec): min=5201, max=45518, avg=13683.41, stdev=4800.25 00:33:23.793 clat (usec): min=4483, max=55941, avg=13760.71, stdev=8663.60 00:33:23.793 lat (usec): min=4496, max=55956, avg=13774.39, stdev=8663.58 00:33:23.793 clat percentiles (usec): 00:33:23.793 | 1.00th=[ 4686], 5.00th=[ 7701], 10.00th=[ 8291], 20.00th=[ 8848], 
00:33:23.793 | 30.00th=[11600], 40.00th=[12518], 50.00th=[13042], 60.00th=[13435], 00:33:23.793 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14877], 95.00th=[16319], 00:33:23.793 | 99.00th=[54264], 99.50th=[54789], 99.90th=[55837], 99.95th=[55837], 00:33:23.793 | 99.99th=[55837] 00:33:23.793 bw ( KiB/s): min=24576, max=35072, per=32.40%, avg=27827.20, stdev=3202.10, samples=10 00:33:23.793 iops : min= 192, max= 274, avg=217.40, stdev=25.02, samples=10 00:33:23.793 lat (msec) : 10=24.89%, 20=70.71%, 50=1.01%, 100=3.40% 00:33:23.793 cpu : usr=92.75%, sys=5.80%, ctx=9, majf=0, minf=0 00:33:23.793 IO depths : 1=7.1%, 2=92.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:23.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.793 issued rwts: total=1089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:23.793 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:23.793 00:33:23.793 Run status group 0 (all jobs): 00:33:23.793 READ: bw=83.9MiB/s (87.9MB/s), 27.2MiB/s-29.0MiB/s (28.5MB/s-30.4MB/s), io=420MiB (440MB), run=5002-5006msec 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:23.793 bdev_null0 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:23.793 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:23.794 [2024-12-06 13:27:29.450490] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:23.794 bdev_null1 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:23.794 bdev_null2 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:23.794 { 00:33:23.794 "params": { 00:33:23.794 "name": 
"Nvme$subsystem", 00:33:23.794 "trtype": "$TEST_TRANSPORT", 00:33:23.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.794 "adrfam": "ipv4", 00:33:23.794 "trsvcid": "$NVMF_PORT", 00:33:23.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.794 "hdgst": ${hdgst:-false}, 00:33:23.794 "ddgst": ${ddgst:-false} 00:33:23.794 }, 00:33:23.794 "method": "bdev_nvme_attach_controller" 00:33:23.794 } 00:33:23.794 EOF 00:33:23.794 )") 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:23.794 { 00:33:23.794 "params": { 00:33:23.794 "name": "Nvme$subsystem", 00:33:23.794 "trtype": "$TEST_TRANSPORT", 00:33:23.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.794 "adrfam": "ipv4", 00:33:23.794 "trsvcid": "$NVMF_PORT", 00:33:23.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.794 "hdgst": ${hdgst:-false}, 00:33:23.794 "ddgst": ${ddgst:-false} 00:33:23.794 }, 00:33:23.794 "method": "bdev_nvme_attach_controller" 00:33:23.794 } 00:33:23.794 EOF 00:33:23.794 )") 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:23.794 13:27:29 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:23.794 { 00:33:23.794 "params": { 00:33:23.794 "name": "Nvme$subsystem", 00:33:23.794 "trtype": "$TEST_TRANSPORT", 00:33:23.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.794 "adrfam": "ipv4", 00:33:23.794 "trsvcid": "$NVMF_PORT", 00:33:23.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.794 "hdgst": ${hdgst:-false}, 00:33:23.794 "ddgst": ${ddgst:-false} 00:33:23.794 }, 00:33:23.794 "method": "bdev_nvme_attach_controller" 00:33:23.794 } 00:33:23.794 EOF 00:33:23.794 )") 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:23.794 13:27:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:23.794 "params": { 00:33:23.794 "name": "Nvme0", 00:33:23.794 "trtype": "tcp", 00:33:23.794 "traddr": "10.0.0.3", 00:33:23.794 "adrfam": "ipv4", 00:33:23.794 "trsvcid": "4420", 00:33:23.794 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:23.794 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:23.794 "hdgst": false, 00:33:23.794 "ddgst": false 00:33:23.794 }, 00:33:23.794 "method": "bdev_nvme_attach_controller" 00:33:23.794 },{ 00:33:23.794 "params": { 00:33:23.794 "name": "Nvme1", 00:33:23.794 "trtype": "tcp", 00:33:23.794 "traddr": "10.0.0.3", 00:33:23.794 "adrfam": "ipv4", 00:33:23.794 "trsvcid": "4420", 00:33:23.794 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:23.794 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:23.794 "hdgst": false, 00:33:23.794 "ddgst": false 00:33:23.794 }, 00:33:23.794 "method": "bdev_nvme_attach_controller" 00:33:23.795 },{ 00:33:23.795 "params": { 00:33:23.795 "name": "Nvme2", 00:33:23.795 "trtype": "tcp", 00:33:23.795 "traddr": "10.0.0.3", 00:33:23.795 "adrfam": "ipv4", 00:33:23.795 "trsvcid": "4420", 00:33:23.795 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:23.795 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:23.795 "hdgst": false, 00:33:23.795 "ddgst": false 00:33:23.795 }, 00:33:23.795 "method": "bdev_nvme_attach_controller" 00:33:23.795 }' 00:33:23.795 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:23.795 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:23.795 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.795 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:23.795 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:23.795 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:23.795 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:23.795 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:23.795 
13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:23.795 13:27:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.795 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:23.795 ... 00:33:23.795 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:23.795 ... 00:33:23.795 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:23.795 ... 00:33:23.795 fio-3.35 00:33:23.795 Starting 24 threads 00:33:36.058 00:33:36.058 filename0: (groupid=0, jobs=1): err= 0: pid=108786: Fri Dec 6 13:27:40 2024 00:33:36.058 read: IOPS=179, BW=718KiB/s (735kB/s)(7200KiB/10030msec) 00:33:36.058 slat (usec): min=3, max=8034, avg=30.45, stdev=377.52 00:33:36.058 clat (msec): min=23, max=203, avg=88.91, stdev=25.59 00:33:36.058 lat (msec): min=24, max=203, avg=88.94, stdev=25.59 00:33:36.058 clat percentiles (msec): 00:33:36.058 | 1.00th=[ 31], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 72], 00:33:36.058 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 96], 00:33:36.058 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 122], 95.00th=[ 133], 00:33:36.058 | 99.00th=[ 167], 99.50th=[ 180], 99.90th=[ 205], 99.95th=[ 205], 00:33:36.058 | 99.99th=[ 205] 00:33:36.058 bw ( KiB/s): min= 512, max= 1048, per=3.65%, avg=712.85, stdev=140.24, samples=20 00:33:36.058 iops : min= 128, max= 262, avg=178.20, stdev=35.06, samples=20 00:33:36.058 lat (msec) : 50=6.61%, 100=67.67%, 250=25.72% 00:33:36.058 cpu : usr=32.27%, sys=0.77%, ctx=859, majf=0, minf=9 00:33:36.058 IO depths : 1=2.4%, 2=5.1%, 4=13.4%, 8=68.1%, 16=11.0%, 32=0.0%, >=64=0.0% 00:33:36.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.058 complete : 0=0.0%, 4=90.8%, 8=4.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.058 issued rwts: total=1800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.058 filename0: (groupid=0, jobs=1): err= 0: pid=108787: Fri Dec 6 13:27:40 2024 00:33:36.058 read: IOPS=221, BW=888KiB/s (909kB/s)(8952KiB/10082msec) 00:33:36.058 slat (usec): min=7, max=8022, avg=24.03, stdev=295.06 00:33:36.058 clat (msec): min=4, max=152, avg=71.78, stdev=27.72 00:33:36.058 lat (msec): min=4, max=152, avg=71.81, stdev=27.73 00:33:36.058 clat percentiles (msec): 00:33:36.058 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 40], 20.00th=[ 51], 00:33:36.058 | 30.00th=[ 60], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 80], 00:33:36.058 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 120], 00:33:36.058 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 153], 00:33:36.058 | 99.99th=[ 153] 00:33:36.058 bw ( KiB/s): min= 640, max= 2285, per=4.55%, avg=887.90, stdev=343.16, samples=20 00:33:36.058 iops : min= 160, max= 571, avg=221.95, stdev=85.74, samples=20 00:33:36.058 lat (msec) : 10=2.14%, 20=3.75%, 50=13.72%, 100=66.49%, 250=13.90% 00:33:36.058 cpu : usr=38.29%, sys=1.13%, ctx=990, majf=0, minf=9 00:33:36.058 IO depths : 1=1.4%, 2=3.0%, 4=10.6%, 8=72.8%, 16=12.1%, 32=0.0%, >=64=0.0% 00:33:36.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.058 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 
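[annotation] The 24 jobs whose per-thread results stream below run 4k random reads at iodepth 16 against the three DIF-type-2 null bdevs exported through cnode0-2 by the rpc_cmd sequence earlier in the trace. Stood up by hand, that target setup is four RPCs per subsystem; the commands are copied from the trace, while the scripts/rpc.py path assumes the repo layout used by this job:

  # Stand up the three DIF-enabled targets directly (mirrors the rpc_cmd trace).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for sub in 0 1 2; do
      # 64 MB null bdev with 512-byte blocks, 16-byte metadata, DIF type 2.
      "$rpc" bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
      "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
          --serial-number "53313233-$sub" --allow-any-host
      "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
      "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
          -t tcp -a 10.0.0.3 -s 4420
  done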
00:33:36.058 issued rwts: total=2238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.058 filename0: (groupid=0, jobs=1): err= 0: pid=108788: Fri Dec 6 13:27:40 2024 00:33:36.058 read: IOPS=207, BW=830KiB/s (850kB/s)(8340KiB/10047msec) 00:33:36.058 slat (usec): min=3, max=8034, avg=22.75, stdev=252.13 00:33:36.058 clat (msec): min=20, max=185, avg=76.94, stdev=25.60 00:33:36.058 lat (msec): min=20, max=185, avg=76.97, stdev=25.61 00:33:36.058 clat percentiles (msec): 00:33:36.058 | 1.00th=[ 24], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 58], 00:33:36.058 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 82], 00:33:36.058 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 121], 00:33:36.058 | 99.00th=[ 171], 99.50th=[ 180], 99.90th=[ 186], 99.95th=[ 186], 00:33:36.058 | 99.99th=[ 186] 00:33:36.058 bw ( KiB/s): min= 560, max= 1272, per=4.24%, avg=826.95, stdev=161.74, samples=20 00:33:36.058 iops : min= 140, max= 318, avg=206.70, stdev=40.44, samples=20 00:33:36.058 lat (msec) : 50=14.58%, 100=71.65%, 250=13.76% 00:33:36.058 cpu : usr=35.51%, sys=0.95%, ctx=1084, majf=0, minf=9 00:33:36.058 IO depths : 1=0.9%, 2=2.1%, 4=7.9%, 8=75.7%, 16=13.4%, 32=0.0%, >=64=0.0% 00:33:36.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.058 complete : 0=0.0%, 4=89.6%, 8=6.6%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.058 issued rwts: total=2085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.058 filename0: (groupid=0, jobs=1): err= 0: pid=108789: Fri Dec 6 13:27:40 2024 00:33:36.058 read: IOPS=177, BW=711KiB/s (728kB/s)(7124KiB/10018msec) 00:33:36.058 slat (usec): min=3, max=8025, avg=20.85, stdev=268.48 00:33:36.058 clat (msec): min=37, max=191, avg=89.77, stdev=25.42 00:33:36.058 lat (msec): min=37, max=191, avg=89.79, stdev=25.42 00:33:36.058 clat percentiles (msec): 00:33:36.058 | 1.00th=[ 39], 5.00th=[ 52], 10.00th=[ 61], 20.00th=[ 72], 00:33:36.058 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 96], 00:33:36.058 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 122], 95.00th=[ 133], 00:33:36.058 | 99.00th=[ 167], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 192], 00:33:36.058 | 99.99th=[ 192] 00:33:36.058 bw ( KiB/s): min= 512, max= 888, per=3.62%, avg=705.60, stdev=103.26, samples=20 00:33:36.058 iops : min= 128, max= 222, avg=176.40, stdev=25.81, samples=20 00:33:36.058 lat (msec) : 50=4.88%, 100=64.74%, 250=30.38% 00:33:36.058 cpu : usr=32.89%, sys=0.92%, ctx=904, majf=0, minf=9 00:33:36.059 IO depths : 1=2.9%, 2=6.2%, 4=16.6%, 8=64.1%, 16=10.1%, 32=0.0%, >=64=0.0% 00:33:36.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.059 complete : 0=0.0%, 4=91.3%, 8=3.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.059 issued rwts: total=1781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.059 filename0: (groupid=0, jobs=1): err= 0: pid=108790: Fri Dec 6 13:27:40 2024 00:33:36.059 read: IOPS=179, BW=719KiB/s (736kB/s)(7216KiB/10034msec) 00:33:36.059 slat (usec): min=4, max=4019, avg=14.21, stdev=94.46 00:33:36.059 clat (msec): min=31, max=182, avg=88.84, stdev=26.42 00:33:36.059 lat (msec): min=31, max=182, avg=88.86, stdev=26.42 00:33:36.059 clat percentiles (msec): 00:33:36.059 | 1.00th=[ 32], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 72], 00:33:36.059 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 94], 
00:33:36.059 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 144], 00:33:36.059 | 99.00th=[ 155], 99.50th=[ 180], 99.90th=[ 184], 99.95th=[ 184], 00:33:36.059 | 99.99th=[ 184] 00:33:36.059 bw ( KiB/s): min= 512, max= 984, per=3.66%, avg=714.00, stdev=112.95, samples=20 00:33:36.059 iops : min= 128, max= 246, avg=178.50, stdev=28.24, samples=20 00:33:36.059 lat (msec) : 50=6.37%, 100=69.29%, 250=24.33% 00:33:36.059 cpu : usr=33.77%, sys=0.98%, ctx=971, majf=0, minf=9 00:33:36.059 IO depths : 1=2.9%, 2=6.5%, 4=17.2%, 8=63.3%, 16=10.0%, 32=0.0%, >=64=0.0% 00:33:36.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.059 complete : 0=0.0%, 4=91.7%, 8=3.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.059 issued rwts: total=1804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.059 filename0: (groupid=0, jobs=1): err= 0: pid=108791: Fri Dec 6 13:27:40 2024 00:33:36.059 read: IOPS=227, BW=908KiB/s (930kB/s)(9144KiB/10067msec) 00:33:36.059 slat (usec): min=6, max=5125, avg=22.97, stdev=218.03 00:33:36.059 clat (msec): min=15, max=170, avg=70.18, stdev=24.42 00:33:36.059 lat (msec): min=15, max=170, avg=70.20, stdev=24.41 00:33:36.059 clat percentiles (msec): 00:33:36.059 | 1.00th=[ 19], 5.00th=[ 38], 10.00th=[ 47], 20.00th=[ 51], 00:33:36.059 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 72], 00:33:36.059 | 70.00th=[ 81], 80.00th=[ 87], 90.00th=[ 105], 95.00th=[ 121], 00:33:36.059 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 171], 99.95th=[ 171], 00:33:36.059 | 99.99th=[ 171] 00:33:36.059 bw ( KiB/s): min= 688, max= 1328, per=4.66%, avg=908.00, stdev=153.28, samples=20 00:33:36.059 iops : min= 172, max= 332, avg=227.00, stdev=38.32, samples=20 00:33:36.059 lat (msec) : 20=1.18%, 50=18.77%, 100=67.89%, 250=12.16% 00:33:36.059 cpu : usr=42.59%, sys=1.36%, ctx=1287, majf=0, minf=9 00:33:36.059 IO depths : 1=1.0%, 2=2.1%, 4=9.1%, 8=75.3%, 16=12.6%, 32=0.0%, >=64=0.0% 00:33:36.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.059 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.059 issued rwts: total=2286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.059 filename0: (groupid=0, jobs=1): err= 0: pid=108792: Fri Dec 6 13:27:40 2024 00:33:36.059 read: IOPS=202, BW=808KiB/s (828kB/s)(8136KiB/10067msec) 00:33:36.059 slat (usec): min=3, max=8063, avg=24.23, stdev=308.62 00:33:36.059 clat (msec): min=14, max=179, avg=78.85, stdev=26.77 00:33:36.059 lat (msec): min=14, max=179, avg=78.88, stdev=26.77 00:33:36.059 clat percentiles (msec): 00:33:36.059 | 1.00th=[ 21], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 58], 00:33:36.059 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:33:36.059 | 70.00th=[ 88], 80.00th=[ 105], 90.00th=[ 109], 95.00th=[ 132], 00:33:36.059 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 180], 99.95th=[ 180], 00:33:36.059 | 99.99th=[ 180] 00:33:36.059 bw ( KiB/s): min= 512, max= 1282, per=4.15%, avg=809.30, stdev=189.15, samples=20 00:33:36.059 iops : min= 128, max= 320, avg=202.30, stdev=47.22, samples=20 00:33:36.059 lat (msec) : 20=0.69%, 50=14.55%, 100=64.70%, 250=20.06% 00:33:36.059 cpu : usr=32.08%, sys=0.99%, ctx=882, majf=0, minf=9 00:33:36.059 IO depths : 1=1.7%, 2=4.2%, 4=14.0%, 8=68.8%, 16=11.3%, 32=0.0%, >=64=0.0% 00:33:36.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.059 
complete : 0=0.0%, 4=90.8%, 8=4.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.059 issued rwts: total=2034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.059 filename0: (groupid=0, jobs=1): err= 0: pid=108793: Fri Dec 6 13:27:40 2024 00:33:36.059 read: IOPS=215, BW=863KiB/s (884kB/s)(8684KiB/10058msec) 00:33:36.059 slat (usec): min=3, max=4055, avg=17.67, stdev=149.86 00:33:36.059 clat (msec): min=23, max=187, avg=73.94, stdev=22.81 00:33:36.059 lat (msec): min=23, max=187, avg=73.96, stdev=22.81 00:33:36.059 clat percentiles (msec): 00:33:36.059 | 1.00th=[ 26], 5.00th=[ 43], 10.00th=[ 49], 20.00th=[ 56], 00:33:36.059 | 30.00th=[ 59], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 78], 00:33:36.059 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 112], 00:33:36.059 | 99.00th=[ 144], 99.50th=[ 161], 99.90th=[ 188], 99.95th=[ 188], 00:33:36.059 | 99.99th=[ 188] 00:33:36.059 bw ( KiB/s): min= 634, max= 1154, per=4.42%, avg=861.80, stdev=148.13, samples=20 00:33:36.059 iops : min= 158, max= 288, avg=215.40, stdev=37.02, samples=20 00:33:36.059 lat (msec) : 50=12.76%, 100=74.62%, 250=12.62% 00:33:36.059 cpu : usr=39.99%, sys=1.10%, ctx=1442, majf=0, minf=9 00:33:36.059 IO depths : 1=1.8%, 2=3.9%, 4=11.4%, 8=71.7%, 16=11.2%, 32=0.0%, >=64=0.0% 00:33:36.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.059 complete : 0=0.0%, 4=90.6%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.059 issued rwts: total=2171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.059 filename1: (groupid=0, jobs=1): err= 0: pid=108794: Fri Dec 6 13:27:40 2024 00:33:36.059 read: IOPS=228, BW=915KiB/s (937kB/s)(9188KiB/10044msec) 00:33:36.059 slat (usec): min=3, max=8061, avg=18.70, stdev=237.28 00:33:36.059 clat (msec): min=7, max=179, avg=69.74, stdev=26.04 00:33:36.059 lat (msec): min=7, max=179, avg=69.76, stdev=26.04 00:33:36.059 clat percentiles (msec): 00:33:36.059 | 1.00th=[ 9], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 49], 00:33:36.059 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 72], 00:33:36.059 | 70.00th=[ 82], 80.00th=[ 91], 90.00th=[ 107], 95.00th=[ 123], 00:33:36.059 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 180], 99.95th=[ 180], 00:33:36.059 | 99.99th=[ 180] 00:33:36.059 bw ( KiB/s): min= 512, max= 1408, per=4.68%, avg=912.45, stdev=205.76, samples=20 00:33:36.059 iops : min= 128, max= 352, avg=228.10, stdev=51.44, samples=20 00:33:36.059 lat (msec) : 10=1.39%, 20=0.70%, 50=22.77%, 100=63.65%, 250=11.49% 00:33:36.059 cpu : usr=37.27%, sys=0.95%, ctx=1084, majf=0, minf=9 00:33:36.059 IO depths : 1=0.8%, 2=2.0%, 4=9.9%, 8=74.7%, 16=12.6%, 32=0.0%, >=64=0.0% 00:33:36.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.059 complete : 0=0.0%, 4=89.8%, 8=5.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.059 issued rwts: total=2297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.059 filename1: (groupid=0, jobs=1): err= 0: pid=108795: Fri Dec 6 13:27:40 2024 00:33:36.059 read: IOPS=213, BW=856KiB/s (876kB/s)(8616KiB/10067msec) 00:33:36.059 slat (usec): min=6, max=12032, avg=25.75, stdev=314.64 00:33:36.059 clat (msec): min=13, max=144, avg=74.52, stdev=26.85 00:33:36.059 lat (msec): min=13, max=144, avg=74.54, stdev=26.85 00:33:36.059 clat percentiles (msec): 00:33:36.060 | 1.00th=[ 15], 5.00th=[ 29], 10.00th=[ 43], 
20.00th=[ 53], 00:33:36.060 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 73], 60.00th=[ 83], 00:33:36.060 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 122], 00:33:36.060 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:33:36.060 | 99.99th=[ 146] 00:33:36.060 bw ( KiB/s): min= 640, max= 1664, per=4.39%, avg=855.20, stdev=234.85, samples=20 00:33:36.060 iops : min= 160, max= 416, avg=213.80, stdev=58.71, samples=20 00:33:36.060 lat (msec) : 20=2.23%, 50=16.02%, 100=67.04%, 250=14.72% 00:33:36.060 cpu : usr=36.33%, sys=1.22%, ctx=1178, majf=0, minf=9 00:33:36.060 IO depths : 1=1.7%, 2=4.0%, 4=12.9%, 8=69.8%, 16=11.6%, 32=0.0%, >=64=0.0% 00:33:36.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.060 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.060 issued rwts: total=2154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.060 filename1: (groupid=0, jobs=1): err= 0: pid=108796: Fri Dec 6 13:27:40 2024 00:33:36.060 read: IOPS=192, BW=768KiB/s (787kB/s)(7728KiB/10060msec) 00:33:36.060 slat (usec): min=3, max=4026, avg=14.00, stdev=91.48 00:33:36.060 clat (msec): min=17, max=179, avg=83.19, stdev=24.22 00:33:36.060 lat (msec): min=17, max=179, avg=83.21, stdev=24.23 00:33:36.060 clat percentiles (msec): 00:33:36.060 | 1.00th=[ 24], 5.00th=[ 45], 10.00th=[ 56], 20.00th=[ 71], 00:33:36.060 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 85], 00:33:36.060 | 70.00th=[ 90], 80.00th=[ 96], 90.00th=[ 121], 95.00th=[ 129], 00:33:36.060 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 180], 00:33:36.060 | 99.99th=[ 180] 00:33:36.060 bw ( KiB/s): min= 512, max= 1280, per=3.93%, avg=766.25, stdev=148.39, samples=20 00:33:36.060 iops : min= 128, max= 320, avg=191.55, stdev=37.10, samples=20 00:33:36.060 lat (msec) : 20=0.83%, 50=6.68%, 100=75.41%, 250=17.08% 00:33:36.060 cpu : usr=37.81%, sys=1.32%, ctx=1130, majf=0, minf=9 00:33:36.060 IO depths : 1=3.2%, 2=6.9%, 4=17.0%, 8=63.5%, 16=9.4%, 32=0.0%, >=64=0.0% 00:33:36.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.060 complete : 0=0.0%, 4=91.9%, 8=2.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.060 issued rwts: total=1932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.060 filename1: (groupid=0, jobs=1): err= 0: pid=108797: Fri Dec 6 13:27:40 2024 00:33:36.060 read: IOPS=246, BW=988KiB/s (1011kB/s)(9888KiB/10011msec) 00:33:36.060 slat (usec): min=8, max=5312, avg=19.35, stdev=171.04 00:33:36.060 clat (msec): min=6, max=161, avg=64.65, stdev=25.80 00:33:36.060 lat (msec): min=6, max=161, avg=64.67, stdev=25.81 00:33:36.060 clat percentiles (msec): 00:33:36.060 | 1.00th=[ 9], 5.00th=[ 20], 10.00th=[ 42], 20.00th=[ 48], 00:33:36.060 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 62], 60.00th=[ 67], 00:33:36.060 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 99], 95.00th=[ 109], 00:33:36.060 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 161], 99.95th=[ 163], 00:33:36.060 | 99.99th=[ 163] 00:33:36.060 bw ( KiB/s): min= 512, max= 2171, per=5.04%, avg=982.25, stdev=323.40, samples=20 00:33:36.060 iops : min= 128, max= 542, avg=245.50, stdev=80.70, samples=20 00:33:36.060 lat (msec) : 10=2.02%, 20=3.16%, 50=23.22%, 100=62.22%, 250=9.39% 00:33:36.060 cpu : usr=42.75%, sys=1.28%, ctx=1456, majf=0, minf=9 00:33:36.060 IO depths : 1=1.5%, 2=3.2%, 4=11.2%, 8=72.5%, 16=11.7%, 32=0.0%, 
>=64=0.0% 00:33:36.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.060 complete : 0=0.0%, 4=90.3%, 8=4.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.060 issued rwts: total=2472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.060 filename1: (groupid=0, jobs=1): err= 0: pid=108798: Fri Dec 6 13:27:40 2024 00:33:36.060 read: IOPS=197, BW=790KiB/s (809kB/s)(7944KiB/10058msec) 00:33:36.060 slat (usec): min=5, max=8035, avg=16.83, stdev=180.17 00:33:36.060 clat (msec): min=31, max=165, avg=80.70, stdev=24.25 00:33:36.060 lat (msec): min=31, max=165, avg=80.72, stdev=24.26 00:33:36.060 clat percentiles (msec): 00:33:36.060 | 1.00th=[ 34], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 61], 00:33:36.060 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 85], 00:33:36.060 | 70.00th=[ 94], 80.00th=[ 99], 90.00th=[ 109], 95.00th=[ 121], 00:33:36.060 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 165], 99.95th=[ 165], 00:33:36.060 | 99.99th=[ 165] 00:33:36.060 bw ( KiB/s): min= 560, max= 1152, per=4.06%, avg=791.30, stdev=147.98, samples=20 00:33:36.060 iops : min= 140, max= 288, avg=197.80, stdev=37.01, samples=20 00:33:36.060 lat (msec) : 50=11.13%, 100=70.90%, 250=17.98% 00:33:36.060 cpu : usr=32.10%, sys=0.89%, ctx=865, majf=0, minf=9 00:33:36.060 IO depths : 1=0.8%, 2=1.7%, 4=9.5%, 8=74.9%, 16=13.1%, 32=0.0%, >=64=0.0% 00:33:36.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.060 complete : 0=0.0%, 4=89.6%, 8=6.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.060 issued rwts: total=1986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.060 filename1: (groupid=0, jobs=1): err= 0: pid=108799: Fri Dec 6 13:27:40 2024 00:33:36.060 read: IOPS=184, BW=737KiB/s (754kB/s)(7384KiB/10025msec) 00:33:36.060 slat (usec): min=4, max=8040, avg=16.37, stdev=186.92 00:33:36.060 clat (msec): min=35, max=177, avg=86.69, stdev=21.46 00:33:36.060 lat (msec): min=35, max=177, avg=86.71, stdev=21.46 00:33:36.060 clat percentiles (msec): 00:33:36.060 | 1.00th=[ 36], 5.00th=[ 54], 10.00th=[ 64], 20.00th=[ 72], 00:33:36.060 | 30.00th=[ 77], 40.00th=[ 81], 50.00th=[ 84], 60.00th=[ 89], 00:33:36.060 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 113], 95.00th=[ 127], 00:33:36.060 | 99.00th=[ 150], 99.50th=[ 150], 99.90th=[ 178], 99.95th=[ 178], 00:33:36.060 | 99.99th=[ 178] 00:33:36.060 bw ( KiB/s): min= 560, max= 896, per=3.75%, avg=731.45, stdev=96.69, samples=20 00:33:36.060 iops : min= 140, max= 224, avg=182.80, stdev=24.19, samples=20 00:33:36.060 lat (msec) : 50=4.28%, 100=74.49%, 250=21.24% 00:33:36.060 cpu : usr=39.99%, sys=1.39%, ctx=1131, majf=0, minf=9 00:33:36.060 IO depths : 1=3.0%, 2=6.8%, 4=17.7%, 8=62.6%, 16=9.9%, 32=0.0%, >=64=0.0% 00:33:36.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.060 complete : 0=0.0%, 4=91.8%, 8=3.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.060 issued rwts: total=1846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.060 filename1: (groupid=0, jobs=1): err= 0: pid=108800: Fri Dec 6 13:27:40 2024 00:33:36.060 read: IOPS=210, BW=844KiB/s (864kB/s)(8488KiB/10060msec) 00:33:36.060 slat (usec): min=4, max=8042, avg=22.90, stdev=301.38 00:33:36.060 clat (msec): min=23, max=143, avg=75.60, stdev=23.93 00:33:36.060 lat (msec): min=23, max=143, avg=75.62, stdev=23.93 00:33:36.060 
clat percentiles (msec): 00:33:36.060 | 1.00th=[ 25], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 53], 00:33:36.060 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 84], 00:33:36.060 | 70.00th=[ 86], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 121], 00:33:36.060 | 99.00th=[ 133], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 144], 00:33:36.060 | 99.99th=[ 144] 00:33:36.060 bw ( KiB/s): min= 640, max= 1208, per=4.32%, avg=842.05, stdev=164.58, samples=20 00:33:36.060 iops : min= 160, max= 302, avg=210.50, stdev=41.15, samples=20 00:33:36.060 lat (msec) : 50=19.60%, 100=67.53%, 250=12.87% 00:33:36.060 cpu : usr=38.85%, sys=0.98%, ctx=837, majf=0, minf=9 00:33:36.061 IO depths : 1=2.0%, 2=4.1%, 4=10.7%, 8=71.2%, 16=12.1%, 32=0.0%, >=64=0.0% 00:33:36.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.061 complete : 0=0.0%, 4=90.5%, 8=5.4%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.061 issued rwts: total=2122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.061 filename1: (groupid=0, jobs=1): err= 0: pid=108801: Fri Dec 6 13:27:40 2024 00:33:36.061 read: IOPS=177, BW=712KiB/s (729kB/s)(7116KiB/10001msec) 00:33:36.061 slat (usec): min=4, max=8042, avg=21.82, stdev=268.94 00:33:36.061 clat (msec): min=4, max=203, avg=89.79, stdev=25.20 00:33:36.061 lat (msec): min=4, max=203, avg=89.81, stdev=25.21 00:33:36.061 clat percentiles (msec): 00:33:36.061 | 1.00th=[ 32], 5.00th=[ 57], 10.00th=[ 63], 20.00th=[ 72], 00:33:36.061 | 30.00th=[ 75], 40.00th=[ 83], 50.00th=[ 86], 60.00th=[ 95], 00:33:36.061 | 70.00th=[ 99], 80.00th=[ 109], 90.00th=[ 123], 95.00th=[ 132], 00:33:36.061 | 99.00th=[ 148], 99.50th=[ 180], 99.90th=[ 205], 99.95th=[ 205], 00:33:36.061 | 99.99th=[ 205] 00:33:36.061 bw ( KiB/s): min= 384, max= 896, per=3.60%, avg=701.89, stdev=123.10, samples=19 00:33:36.061 iops : min= 96, max= 224, avg=175.47, stdev=30.77, samples=19 00:33:36.061 lat (msec) : 10=0.90%, 50=3.32%, 100=69.31%, 250=26.48% 00:33:36.061 cpu : usr=33.71%, sys=1.05%, ctx=953, majf=0, minf=9 00:33:36.061 IO depths : 1=2.9%, 2=6.4%, 4=17.0%, 8=63.7%, 16=10.0%, 32=0.0%, >=64=0.0% 00:33:36.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.061 complete : 0=0.0%, 4=91.9%, 8=2.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.061 issued rwts: total=1779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.061 filename2: (groupid=0, jobs=1): err= 0: pid=108802: Fri Dec 6 13:27:40 2024 00:33:36.061 read: IOPS=183, BW=734KiB/s (752kB/s)(7340KiB/10001msec) 00:33:36.061 slat (usec): min=3, max=4022, avg=13.64, stdev=93.78 00:33:36.061 clat (usec): min=1516, max=177954, avg=87123.38, stdev=29384.81 00:33:36.061 lat (usec): min=1524, max=177983, avg=87137.02, stdev=29384.66 00:33:36.061 clat percentiles (usec): 00:33:36.061 | 1.00th=[ 1942], 5.00th=[ 47973], 10.00th=[ 56886], 20.00th=[ 71828], 00:33:36.061 | 30.00th=[ 73925], 40.00th=[ 80217], 50.00th=[ 84411], 60.00th=[ 90702], 00:33:36.061 | 70.00th=[ 95945], 80.00th=[107480], 90.00th=[120062], 95.00th=[135267], 00:33:36.061 | 99.00th=[173016], 99.50th=[177210], 99.90th=[177210], 99.95th=[177210], 00:33:36.061 | 99.99th=[177210] 00:33:36.061 bw ( KiB/s): min= 512, max= 936, per=3.61%, avg=704.42, stdev=127.26, samples=19 00:33:36.061 iops : min= 128, max= 234, avg=176.11, stdev=31.82, samples=19 00:33:36.061 lat (msec) : 2=1.25%, 4=1.36%, 10=0.60%, 50=3.76%, 100=65.34% 00:33:36.061 lat 
(msec) : 250=27.68% 00:33:36.061 cpu : usr=37.85%, sys=1.16%, ctx=1237, majf=0, minf=9 00:33:36.061 IO depths : 1=1.4%, 2=3.3%, 4=11.7%, 8=71.0%, 16=12.7%, 32=0.0%, >=64=0.0% 00:33:36.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.061 complete : 0=0.0%, 4=90.5%, 8=5.4%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.061 issued rwts: total=1835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.061 filename2: (groupid=0, jobs=1): err= 0: pid=108803: Fri Dec 6 13:27:40 2024 00:33:36.061 read: IOPS=187, BW=751KiB/s (769kB/s)(7516KiB/10009msec) 00:33:36.061 slat (usec): min=4, max=8053, avg=27.08, stdev=331.95 00:33:36.061 clat (msec): min=32, max=179, avg=85.02, stdev=23.97 00:33:36.061 lat (msec): min=32, max=179, avg=85.05, stdev=23.96 00:33:36.061 clat percentiles (msec): 00:33:36.061 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 70], 00:33:36.061 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 87], 00:33:36.061 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 121], 95.00th=[ 132], 00:33:36.061 | 99.00th=[ 144], 99.50th=[ 167], 99.90th=[ 180], 99.95th=[ 180], 00:33:36.061 | 99.99th=[ 180] 00:33:36.061 bw ( KiB/s): min= 592, max= 1000, per=3.85%, avg=750.42, stdev=106.00, samples=19 00:33:36.061 iops : min= 148, max= 250, avg=187.58, stdev=26.51, samples=19 00:33:36.061 lat (msec) : 50=7.13%, 100=70.14%, 250=22.72% 00:33:36.061 cpu : usr=33.35%, sys=0.86%, ctx=964, majf=0, minf=9 00:33:36.061 IO depths : 1=2.3%, 2=5.2%, 4=14.4%, 8=67.3%, 16=10.8%, 32=0.0%, >=64=0.0% 00:33:36.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.061 complete : 0=0.0%, 4=91.1%, 8=3.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.061 issued rwts: total=1879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.061 filename2: (groupid=0, jobs=1): err= 0: pid=108804: Fri Dec 6 13:27:40 2024 00:33:36.061 read: IOPS=240, BW=960KiB/s (983kB/s)(9668KiB/10068msec) 00:33:36.061 slat (usec): min=4, max=4334, avg=17.75, stdev=151.90 00:33:36.061 clat (msec): min=13, max=156, avg=66.37, stdev=22.93 00:33:36.061 lat (msec): min=13, max=156, avg=66.39, stdev=22.93 00:33:36.061 clat percentiles (msec): 00:33:36.061 | 1.00th=[ 18], 5.00th=[ 32], 10.00th=[ 44], 20.00th=[ 48], 00:33:36.061 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 70], 00:33:36.061 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 107], 00:33:36.061 | 99.00th=[ 133], 99.50th=[ 153], 99.90th=[ 157], 99.95th=[ 157], 00:33:36.061 | 99.99th=[ 157] 00:33:36.061 bw ( KiB/s): min= 608, max= 1539, per=4.92%, avg=960.55, stdev=195.09, samples=20 00:33:36.061 iops : min= 152, max= 384, avg=240.10, stdev=48.66, samples=20 00:33:36.061 lat (msec) : 20=1.99%, 50=20.52%, 100=69.05%, 250=8.44% 00:33:36.061 cpu : usr=40.77%, sys=1.21%, ctx=1554, majf=0, minf=9 00:33:36.061 IO depths : 1=0.3%, 2=0.7%, 4=5.6%, 8=79.4%, 16=14.0%, 32=0.0%, >=64=0.0% 00:33:36.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.061 complete : 0=0.0%, 4=89.2%, 8=7.0%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.061 issued rwts: total=2417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.061 filename2: (groupid=0, jobs=1): err= 0: pid=108805: Fri Dec 6 13:27:40 2024 00:33:36.061 read: IOPS=185, BW=741KiB/s (759kB/s)(7412KiB/10004msec) 00:33:36.061 slat (usec): min=3, 
max=8024, avg=22.56, stdev=279.07 00:33:36.061 clat (msec): min=4, max=167, avg=86.21, stdev=24.47 00:33:36.061 lat (msec): min=4, max=168, avg=86.24, stdev=24.46 00:33:36.061 clat percentiles (msec): 00:33:36.061 | 1.00th=[ 25], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 71], 00:33:36.061 | 30.00th=[ 74], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 87], 00:33:36.061 | 70.00th=[ 95], 80.00th=[ 103], 90.00th=[ 121], 95.00th=[ 132], 00:33:36.061 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 169], 99.95th=[ 169], 00:33:36.061 | 99.99th=[ 169] 00:33:36.061 bw ( KiB/s): min= 512, max= 1024, per=3.74%, avg=730.53, stdev=111.21, samples=19 00:33:36.061 iops : min= 128, max= 256, avg=182.63, stdev=27.80, samples=19 00:33:36.061 lat (msec) : 10=0.86%, 50=2.59%, 100=76.42%, 250=20.13% 00:33:36.061 cpu : usr=37.58%, sys=1.34%, ctx=1249, majf=0, minf=9 00:33:36.061 IO depths : 1=3.0%, 2=6.6%, 4=17.0%, 8=63.6%, 16=9.8%, 32=0.0%, >=64=0.0% 00:33:36.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.061 complete : 0=0.0%, 4=92.0%, 8=2.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.061 issued rwts: total=1853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.061 filename2: (groupid=0, jobs=1): err= 0: pid=108806: Fri Dec 6 13:27:40 2024 00:33:36.061 read: IOPS=222, BW=890KiB/s (911kB/s)(8960KiB/10067msec) 00:33:36.061 slat (usec): min=6, max=4029, avg=14.81, stdev=85.21 00:33:36.061 clat (msec): min=16, max=183, avg=71.65, stdev=26.22 00:33:36.061 lat (msec): min=17, max=184, avg=71.66, stdev=26.22 00:33:36.061 clat percentiles (msec): 00:33:36.061 | 1.00th=[ 23], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 52], 00:33:36.061 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 72], 00:33:36.061 | 70.00th=[ 80], 80.00th=[ 92], 90.00th=[ 107], 95.00th=[ 127], 00:33:36.061 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 184], 99.95th=[ 184], 00:33:36.061 | 99.99th=[ 184] 00:33:36.062 bw ( KiB/s): min= 608, max= 1280, per=4.56%, avg=889.60, stdev=191.25, samples=20 00:33:36.062 iops : min= 152, max= 320, avg=222.40, stdev=47.81, samples=20 00:33:36.062 lat (msec) : 20=0.31%, 50=17.90%, 100=68.79%, 250=12.99% 00:33:36.062 cpu : usr=43.06%, sys=1.26%, ctx=1229, majf=0, minf=9 00:33:36.062 IO depths : 1=0.7%, 2=1.7%, 4=8.1%, 8=76.4%, 16=13.1%, 32=0.0%, >=64=0.0% 00:33:36.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.062 complete : 0=0.0%, 4=89.6%, 8=6.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.062 issued rwts: total=2240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.062 filename2: (groupid=0, jobs=1): err= 0: pid=108807: Fri Dec 6 13:27:40 2024 00:33:36.062 read: IOPS=182, BW=729KiB/s (746kB/s)(7292KiB/10009msec) 00:33:36.062 slat (nsec): min=3930, max=65891, avg=12473.04, stdev=5368.31 00:33:36.062 clat (msec): min=35, max=193, avg=87.66, stdev=23.60 00:33:36.062 lat (msec): min=35, max=193, avg=87.67, stdev=23.60 00:33:36.062 clat percentiles (msec): 00:33:36.062 | 1.00th=[ 42], 5.00th=[ 56], 10.00th=[ 65], 20.00th=[ 72], 00:33:36.062 | 30.00th=[ 77], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 90], 00:33:36.062 | 70.00th=[ 95], 80.00th=[ 104], 90.00th=[ 115], 95.00th=[ 132], 00:33:36.062 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 194], 99.95th=[ 194], 00:33:36.062 | 99.99th=[ 194] 00:33:36.062 bw ( KiB/s): min= 384, max= 896, per=3.70%, avg=722.95, stdev=116.91, samples=19 00:33:36.062 iops : min= 96, max= 
224, avg=180.74, stdev=29.23, samples=19 00:33:36.062 lat (msec) : 50=4.17%, 100=73.94%, 250=21.89% 00:33:36.062 cpu : usr=43.28%, sys=1.42%, ctx=1227, majf=0, minf=10 00:33:36.062 IO depths : 1=2.2%, 2=5.0%, 4=15.2%, 8=67.0%, 16=10.6%, 32=0.0%, >=64=0.0% 00:33:36.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.062 complete : 0=0.0%, 4=91.2%, 8=3.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.062 issued rwts: total=1823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.062 filename2: (groupid=0, jobs=1): err= 0: pid=108808: Fri Dec 6 13:27:40 2024 00:33:36.062 read: IOPS=212, BW=850KiB/s (870kB/s)(8512KiB/10017msec) 00:33:36.062 slat (usec): min=3, max=8038, avg=23.64, stdev=232.76 00:33:36.062 clat (msec): min=33, max=192, avg=75.15, stdev=24.84 00:33:36.062 lat (msec): min=33, max=192, avg=75.17, stdev=24.83 00:33:36.062 clat percentiles (msec): 00:33:36.062 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:33:36.062 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 75], 00:33:36.062 | 70.00th=[ 84], 80.00th=[ 92], 90.00th=[ 109], 95.00th=[ 129], 00:33:36.062 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 167], 00:33:36.062 | 99.99th=[ 192] 00:33:36.062 bw ( KiB/s): min= 640, max= 1152, per=4.33%, avg=844.60, stdev=142.84, samples=20 00:33:36.062 iops : min= 160, max= 288, avg=211.15, stdev=35.71, samples=20 00:33:36.062 lat (msec) : 50=13.77%, 100=71.90%, 250=14.33% 00:33:36.062 cpu : usr=38.07%, sys=1.19%, ctx=1207, majf=0, minf=9 00:33:36.062 IO depths : 1=1.0%, 2=2.2%, 4=8.8%, 8=75.6%, 16=12.4%, 32=0.0%, >=64=0.0% 00:33:36.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.062 complete : 0=0.0%, 4=89.6%, 8=5.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.062 issued rwts: total=2128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.062 filename2: (groupid=0, jobs=1): err= 0: pid=108809: Fri Dec 6 13:27:40 2024 00:33:36.062 read: IOPS=216, BW=866KiB/s (887kB/s)(8700KiB/10042msec) 00:33:36.062 slat (usec): min=4, max=8066, avg=17.07, stdev=189.25 00:33:36.062 clat (msec): min=14, max=165, avg=73.68, stdev=24.09 00:33:36.062 lat (msec): min=14, max=165, avg=73.70, stdev=24.09 00:33:36.062 clat percentiles (msec): 00:33:36.062 | 1.00th=[ 23], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 56], 00:33:36.062 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 78], 00:33:36.062 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 104], 95.00th=[ 120], 00:33:36.062 | 99.00th=[ 155], 99.50th=[ 155], 99.90th=[ 165], 99.95th=[ 165], 00:33:36.062 | 99.99th=[ 165] 00:33:36.062 bw ( KiB/s): min= 640, max= 1376, per=4.43%, avg=863.25, stdev=184.65, samples=20 00:33:36.062 iops : min= 160, max= 344, avg=215.80, stdev=46.15, samples=20 00:33:36.062 lat (msec) : 20=0.32%, 50=13.06%, 100=75.03%, 250=11.59% 00:33:36.062 cpu : usr=39.25%, sys=1.34%, ctx=1237, majf=0, minf=9 00:33:36.062 IO depths : 1=1.1%, 2=2.4%, 4=9.1%, 8=74.8%, 16=12.6%, 32=0.0%, >=64=0.0% 00:33:36.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.062 complete : 0=0.0%, 4=90.0%, 8=5.6%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.062 issued rwts: total=2175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:36.062 00:33:36.062 Run status group 0 (all jobs): 00:33:36.062 READ: bw=19.0MiB/s (20.0MB/s), 
711KiB/s-988KiB/s (728kB/s-1011kB/s), io=192MiB (201MB), run=10001-10082msec 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.062 
13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:36.062 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.063 bdev_null0 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.063 [2024-12-06 13:27:40.847705] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.063 bdev_null1 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:36.063 { 00:33:36.063 "params": { 00:33:36.063 "name": "Nvme$subsystem", 00:33:36.063 "trtype": "$TEST_TRANSPORT", 00:33:36.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:36.063 "adrfam": "ipv4", 00:33:36.063 "trsvcid": "$NVMF_PORT", 00:33:36.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:36.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:36.063 "hdgst": ${hdgst:-false}, 00:33:36.063 "ddgst": ${ddgst:-false} 00:33:36.063 }, 00:33:36.063 "method": "bdev_nvme_attach_controller" 00:33:36.063 } 00:33:36.063 EOF 00:33:36.063 )") 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:36.063 13:27:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:36.063 { 00:33:36.063 "params": { 00:33:36.063 "name": "Nvme$subsystem", 00:33:36.063 "trtype": "$TEST_TRANSPORT", 00:33:36.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:36.063 "adrfam": "ipv4", 00:33:36.063 "trsvcid": "$NVMF_PORT", 00:33:36.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:36.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:36.063 "hdgst": ${hdgst:-false}, 00:33:36.063 "ddgst": ${ddgst:-false} 00:33:36.063 }, 00:33:36.063 "method": "bdev_nvme_attach_controller" 00:33:36.063 } 00:33:36.063 EOF 00:33:36.063 )") 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
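Interleaved with the fio plumbing above, gen_nvmf_target_json accumulates one bdev_nvme_attach_controller fragment per subsystem (the two heredocs in the trace), comma-joins them, and hands the result to jq for validation and pretty-printing. A minimal standalone sketch of that pattern follows; the outer "subsystems"/"config" wrapper is an assumption about what the helper emits around the fragments, since only the params objects appear verbatim in this trace:

config=()
for sub in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
# Comma-join the fragments, then let jq validate and pretty-print the result,
# matching the IFS=, / printf / jq . sequence in the trace.
joined=$(IFS=','; printf '%s' "${config[*]}")
jq . <<EOF
{"subsystems": [{"subsystem": "bdev", "config": [$joined]}]}
EOF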
00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:36.063 "params": { 00:33:36.063 "name": "Nvme0", 00:33:36.063 "trtype": "tcp", 00:33:36.063 "traddr": "10.0.0.3", 00:33:36.063 "adrfam": "ipv4", 00:33:36.063 "trsvcid": "4420", 00:33:36.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:36.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:36.063 "hdgst": false, 00:33:36.063 "ddgst": false 00:33:36.063 }, 00:33:36.063 "method": "bdev_nvme_attach_controller" 00:33:36.063 },{ 00:33:36.063 "params": { 00:33:36.063 "name": "Nvme1", 00:33:36.063 "trtype": "tcp", 00:33:36.063 "traddr": "10.0.0.3", 00:33:36.063 "adrfam": "ipv4", 00:33:36.063 "trsvcid": "4420", 00:33:36.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:36.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:36.063 "hdgst": false, 00:33:36.063 "ddgst": false 00:33:36.063 }, 00:33:36.063 "method": "bdev_nvme_attach_controller" 00:33:36.063 }' 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:36.063 13:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.063 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:36.063 ... 00:33:36.063 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:36.063 ... 
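The run just launched combines two pieces visible in the trace: an ldd probe of the fio plugin for a linked libasan/libclang_rt.asan (empty here, hence the bare LD_PRELOAD), and the fio invocation itself with the external spdk_bdev ioengine fed by the generated bdev JSON config. A condensed sketch, using the paths printed above; the /tmp file names are illustrative stand-ins for the anonymous /dev/fd/62 and /dev/fd/61 descriptors the harness actually uses:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# Probe the plugin for a linked sanitizer runtime; empty on regular builds,
# so only the plugin itself ends up in LD_PRELOAD.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf /tmp/bdev.json \
    /tmp/dif.fio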
00:33:36.063 fio-3.35 00:33:36.063 Starting 4 threads 00:33:40.285 00:33:40.285 filename0: (groupid=0, jobs=1): err= 0: pid=108926: Fri Dec 6 13:27:46 2024 00:33:40.285 read: IOPS=1844, BW=14.4MiB/s (15.1MB/s)(72.1MiB/5002msec) 00:33:40.285 slat (nsec): min=4696, max=69249, avg=12588.09, stdev=5157.23 00:33:40.285 clat (usec): min=1170, max=9999, avg=4277.06, stdev=455.35 00:33:40.285 lat (usec): min=1180, max=10015, avg=4289.64, stdev=455.16 00:33:40.285 clat percentiles (usec): 00:33:40.285 | 1.00th=[ 4015], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4113], 00:33:40.285 | 30.00th=[ 4146], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:33:40.285 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 5211], 00:33:40.285 | 99.00th=[ 5800], 99.50th=[ 5932], 99.90th=[ 7111], 99.95th=[10028], 00:33:40.285 | 99.99th=[10028] 00:33:40.285 bw ( KiB/s): min=12288, max=15792, per=24.98%, avg=14711.11, stdev=982.25, samples=9 00:33:40.285 iops : min= 1536, max= 1974, avg=1838.89, stdev=122.78, samples=9 00:33:40.285 lat (msec) : 2=0.52%, 4=0.51%, 10=98.97% 00:33:40.285 cpu : usr=93.86%, sys=4.82%, ctx=106, majf=0, minf=0 00:33:40.285 IO depths : 1=12.1%, 2=24.9%, 4=50.1%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.285 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.285 issued rwts: total=9227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.285 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:40.285 filename0: (groupid=0, jobs=1): err= 0: pid=108927: Fri Dec 6 13:27:46 2024 00:33:40.285 read: IOPS=1838, BW=14.4MiB/s (15.1MB/s)(71.8MiB/5002msec) 00:33:40.285 slat (usec): min=3, max=103, avg=16.37, stdev= 4.38 00:33:40.285 clat (usec): min=3069, max=9986, avg=4270.55, stdev=412.21 00:33:40.285 lat (usec): min=3091, max=10001, avg=4286.92, stdev=412.05 00:33:40.285 clat percentiles (usec): 00:33:40.285 | 1.00th=[ 4015], 5.00th=[ 4047], 10.00th=[ 4047], 20.00th=[ 4080], 00:33:40.285 | 30.00th=[ 4113], 40.00th=[ 4146], 50.00th=[ 4146], 60.00th=[ 4178], 00:33:40.286 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4555], 95.00th=[ 5211], 00:33:40.286 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 7898], 99.95th=[ 9896], 00:33:40.286 | 99.99th=[10028] 00:33:40.286 bw ( KiB/s): min=12288, max=15232, per=24.88%, avg=14654.22, stdev=937.89, samples=9 00:33:40.286 iops : min= 1536, max= 1904, avg=1831.78, stdev=117.24, samples=9 00:33:40.286 lat (msec) : 4=0.60%, 10=99.40% 00:33:40.286 cpu : usr=94.52%, sys=4.20%, ctx=70, majf=0, minf=0 00:33:40.286 IO depths : 1=12.1%, 2=25.0%, 4=50.0%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.286 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.286 issued rwts: total=9195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.286 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:40.286 filename1: (groupid=0, jobs=1): err= 0: pid=108928: Fri Dec 6 13:27:46 2024 00:33:40.286 read: IOPS=1839, BW=14.4MiB/s (15.1MB/s)(71.9MiB/5001msec) 00:33:40.286 slat (nsec): min=4740, max=52692, avg=15702.72, stdev=4297.40 00:33:40.286 clat (usec): min=2418, max=9988, avg=4269.06, stdev=403.93 00:33:40.286 lat (usec): min=2437, max=10005, avg=4284.76, stdev=403.80 00:33:40.286 clat percentiles (usec): 00:33:40.286 | 1.00th=[ 4015], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4080], 00:33:40.286 | 30.00th=[ 4113], 40.00th=[ 
4146], 50.00th=[ 4146], 60.00th=[ 4178], 00:33:40.286 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4490], 95.00th=[ 5211], 00:33:40.286 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 7898], 99.95th=[ 9896], 00:33:40.286 | 99.99th=[10028] 00:33:40.286 bw ( KiB/s): min=12288, max=15232, per=24.90%, avg=14663.11, stdev=940.85, samples=9 00:33:40.286 iops : min= 1536, max= 1904, avg=1832.89, stdev=117.61, samples=9 00:33:40.286 lat (msec) : 4=0.49%, 10=99.51% 00:33:40.286 cpu : usr=94.60%, sys=4.18%, ctx=11, majf=0, minf=0 00:33:40.286 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.286 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.286 issued rwts: total=9200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.286 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:40.286 filename1: (groupid=0, jobs=1): err= 0: pid=108929: Fri Dec 6 13:27:46 2024 00:33:40.286 read: IOPS=1839, BW=14.4MiB/s (15.1MB/s)(71.9MiB/5002msec) 00:33:40.286 slat (nsec): min=4045, max=51043, avg=10767.44, stdev=4387.69 00:33:40.286 clat (usec): min=3111, max=10013, avg=4296.22, stdev=395.56 00:33:40.286 lat (usec): min=3119, max=10027, avg=4306.99, stdev=395.21 00:33:40.286 clat percentiles (usec): 00:33:40.286 | 1.00th=[ 4047], 5.00th=[ 4080], 10.00th=[ 4113], 20.00th=[ 4113], 00:33:40.286 | 30.00th=[ 4146], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:33:40.286 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 5276], 00:33:40.286 | 99.00th=[ 5800], 99.50th=[ 5866], 99.90th=[ 6128], 99.95th=[10028], 00:33:40.286 | 99.99th=[10028] 00:33:40.286 bw ( KiB/s): min=12288, max=15232, per=24.90%, avg=14663.11, stdev=940.85, samples=9 00:33:40.286 iops : min= 1536, max= 1904, avg=1832.89, stdev=117.61, samples=9 00:33:40.286 lat (msec) : 4=0.39%, 10=99.58%, 20=0.03% 00:33:40.286 cpu : usr=94.48%, sys=4.34%, ctx=4, majf=0, minf=0 00:33:40.286 IO depths : 1=11.2%, 2=25.0%, 4=50.0%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.286 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.286 issued rwts: total=9200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.286 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:40.286 00:33:40.286 Run status group 0 (all jobs): 00:33:40.286 READ: bw=57.5MiB/s (60.3MB/s), 14.4MiB/s-14.4MiB/s (15.1MB/s-15.1MB/s), io=288MiB (302MB), run=5001-5002msec 00:33:40.544 13:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:40.544 13:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:40.544 13:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:40.544 13:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:40.544 13:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:40.544 13:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:40.544 13:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.544 13:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.544 13:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.544 13:27:46 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:40.545 13:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.545 13:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.545 13:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.545 13:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:40.545 13:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:40.545 13:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:40.545 13:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:40.545 13:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.545 13:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.545 13:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.545 13:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:40.545 13:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.545 13:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.545 13:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.545 00:33:40.545 real 0m23.542s 00:33:40.545 user 2m5.466s 00:33:40.545 sys 0m5.269s 00:33:40.545 13:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:40.545 13:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.545 ************************************ 00:33:40.545 END TEST fio_dif_rand_params 00:33:40.545 ************************************ 00:33:40.545 13:27:46 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:40.545 13:27:46 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:40.545 13:27:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:40.545 13:27:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:40.545 ************************************ 00:33:40.545 START TEST fio_dif_digest 00:33:40.545 ************************************ 00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 
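For the digest test, create_subsystems builds a single 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata per block and DIF type 3, then exports it over NVMe/TCP. Since rpc_cmd is a thin wrapper around scripts/rpc.py, the four RPCs the trace issues below could equally be sent by hand against a running target (assuming the default RPC socket), roughly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420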
00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.545 13:27:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:40.545 bdev_null0 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:40.545 [2024-12-06 13:27:47.023826] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:40.545 { 00:33:40.545 "params": { 00:33:40.545 "name": "Nvme$subsystem", 00:33:40.545 "trtype": "$TEST_TRANSPORT", 00:33:40.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:40.545 "adrfam": "ipv4", 00:33:40.545 
"trsvcid": "$NVMF_PORT", 00:33:40.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:40.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:40.545 "hdgst": ${hdgst:-false}, 00:33:40.545 "ddgst": ${ddgst:-false} 00:33:40.545 }, 00:33:40.545 "method": "bdev_nvme_attach_controller" 00:33:40.545 } 00:33:40.545 EOF 00:33:40.545 )") 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:40.545 "params": { 00:33:40.545 "name": "Nvme0", 00:33:40.545 "trtype": "tcp", 00:33:40.545 "traddr": "10.0.0.3", 00:33:40.545 "adrfam": "ipv4", 00:33:40.545 "trsvcid": "4420", 00:33:40.545 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:40.545 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:40.545 "hdgst": true, 00:33:40.545 "ddgst": true 00:33:40.545 }, 00:33:40.545 "method": "bdev_nvme_attach_controller" 00:33:40.545 }' 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:40.545 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:40.803 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:40.803 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:40.803 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:40.803 13:27:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.804 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:40.804 ... 
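gen_fio_conf turns the digest-test parameters (bs=128k, iodepth=3, numjobs=3, runtime=10) into the job file fio echoes back in the filename0 line above. A plausible reconstruction of that section follows; the filename value is an assumption based on SPDK's NvmeXnY naming for attached controller namespaces, and the exact global options are not shown in the trace:

[global]
ioengine=spdk_bdev
thread=1
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3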
00:33:40.804 fio-3.35 00:33:40.804 Starting 3 threads 00:33:53.021 00:33:53.021 filename0: (groupid=0, jobs=1): err= 0: pid=109035: Fri Dec 6 13:27:57 2024 00:33:53.021 read: IOPS=230, BW=28.9MiB/s (30.3MB/s)(289MiB/10008msec) 00:33:53.021 slat (nsec): min=8021, max=42851, avg=13769.44, stdev=3498.12 00:33:53.021 clat (usec): min=7735, max=56776, avg=12972.14, stdev=2348.03 00:33:53.021 lat (usec): min=7759, max=56790, avg=12985.91, stdev=2348.18 00:33:53.021 clat percentiles (usec): 00:33:53.021 | 1.00th=[10683], 5.00th=[11207], 10.00th=[11600], 20.00th=[12125], 00:33:53.021 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13042], 00:33:53.021 | 70.00th=[13304], 80.00th=[13566], 90.00th=[13960], 95.00th=[14353], 00:33:53.021 | 99.00th=[16319], 99.50th=[18220], 99.90th=[54789], 99.95th=[55837], 00:33:53.021 | 99.99th=[56886] 00:33:53.021 bw ( KiB/s): min=25344, max=30464, per=39.09%, avg=29520.84, stdev=1291.48, samples=19 00:33:53.021 iops : min= 198, max= 238, avg=230.63, stdev=10.09, samples=19 00:33:53.021 lat (msec) : 10=0.04%, 20=99.70%, 100=0.26% 00:33:53.021 cpu : usr=92.48%, sys=6.08%, ctx=119, majf=0, minf=0 00:33:53.021 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:53.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.021 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.021 issued rwts: total=2311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.021 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:53.021 filename0: (groupid=0, jobs=1): err= 0: pid=109036: Fri Dec 6 13:27:57 2024 00:33:53.021 read: IOPS=194, BW=24.3MiB/s (25.5MB/s)(243MiB/10003msec) 00:33:53.021 slat (nsec): min=7939, max=51409, avg=13877.36, stdev=4104.93 00:33:53.021 clat (usec): min=5974, max=20987, avg=15402.40, stdev=1354.44 00:33:53.022 lat (usec): min=5989, max=21005, avg=15416.28, stdev=1354.38 00:33:53.022 clat percentiles (usec): 00:33:53.022 | 1.00th=[10290], 5.00th=[13435], 10.00th=[13960], 20.00th=[14484], 00:33:53.022 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:33:53.022 | 70.00th=[16057], 80.00th=[16450], 90.00th=[16909], 95.00th=[17433], 00:33:53.022 | 99.00th=[18482], 99.50th=[19268], 99.90th=[20579], 99.95th=[21103], 00:33:53.022 | 99.99th=[21103] 00:33:53.022 bw ( KiB/s): min=24064, max=26112, per=32.95%, avg=24885.89, stdev=439.93, samples=19 00:33:53.022 iops : min= 188, max= 204, avg=194.42, stdev= 3.44, samples=19 00:33:53.022 lat (msec) : 10=0.87%, 20=99.02%, 50=0.10% 00:33:53.022 cpu : usr=92.89%, sys=5.78%, ctx=13, majf=0, minf=0 00:33:53.022 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:53.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.022 issued rwts: total=1946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.022 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:53.022 filename0: (groupid=0, jobs=1): err= 0: pid=109037: Fri Dec 6 13:27:57 2024 00:33:53.022 read: IOPS=164, BW=20.6MiB/s (21.6MB/s)(206MiB/10003msec) 00:33:53.022 slat (nsec): min=7480, max=44347, avg=13905.86, stdev=4558.75 00:33:53.022 clat (usec): min=9136, max=25678, avg=18188.14, stdev=1231.27 00:33:53.022 lat (usec): min=9148, max=25693, avg=18202.05, stdev=1231.39 00:33:53.022 clat percentiles (usec): 00:33:53.022 | 1.00th=[13435], 5.00th=[16712], 10.00th=[16909], 20.00th=[17433], 00:33:53.022 | 
30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18482], 00:33:53.022 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19268], 95.00th=[19792], 00:33:53.022 | 99.00th=[20841], 99.50th=[23462], 99.90th=[25035], 99.95th=[25560], 00:33:53.022 | 99.99th=[25560] 00:33:53.022 bw ( KiB/s): min=20480, max=22528, per=27.90%, avg=21072.84, stdev=490.59, samples=19 00:33:53.022 iops : min= 160, max= 176, avg=164.63, stdev= 3.83, samples=19 00:33:53.022 lat (msec) : 10=0.12%, 20=96.36%, 50=3.52% 00:33:53.022 cpu : usr=92.81%, sys=5.75%, ctx=110, majf=0, minf=0 00:33:53.022 IO depths : 1=7.5%, 2=92.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:53.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.022 issued rwts: total=1648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.022 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:53.022 00:33:53.022 Run status group 0 (all jobs): 00:33:53.022 READ: bw=73.8MiB/s (77.3MB/s), 20.6MiB/s-28.9MiB/s (21.6MB/s-30.3MB/s), io=738MiB (774MB), run=10003-10008msec 00:33:53.022 13:27:57 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:53.022 13:27:57 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:33:53.022 13:27:57 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:33:53.022 13:27:57 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:53.022 13:27:57 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:33:53.022 13:27:57 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:53.022 13:27:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.022 13:27:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:53.022 13:27:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.022 13:27:57 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:53.022 13:27:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.022 13:27:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:53.022 13:27:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.022 00:33:53.022 real 0m10.916s 00:33:53.022 user 0m28.427s 00:33:53.022 sys 0m1.996s 00:33:53.022 13:27:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:53.022 13:27:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:53.022 ************************************ 00:33:53.022 END TEST fio_dif_digest 00:33:53.022 ************************************ 00:33:53.022 13:27:57 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:53.022 13:27:57 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:33:53.022 13:27:57 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:53.022 13:27:57 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:33:53.022 13:27:57 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:53.022 13:27:57 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:33:53.022 13:27:57 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:53.022 13:27:57 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:53.022 rmmod nvme_tcp 00:33:53.022 rmmod nvme_fabrics 00:33:53.022 rmmod nvme_keyring 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 108311 ']' 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 108311 00:33:53.022 13:27:58 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 108311 ']' 00:33:53.022 13:27:58 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 108311 00:33:53.022 13:27:58 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:33:53.022 13:27:58 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:53.022 13:27:58 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108311 00:33:53.022 13:27:58 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:53.022 killing process with pid 108311 00:33:53.022 13:27:58 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:53.022 13:27:58 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108311' 00:33:53.022 13:27:58 nvmf_dif -- common/autotest_common.sh@973 -- # kill 108311 00:33:53.022 13:27:58 nvmf_dif -- common/autotest_common.sh@978 -- # wait 108311 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:53.022 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:53.022 Waiting for block devices as requested 00:33:53.022 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:53.022 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:53.022 13:27:58 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:53.022 13:27:59 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:53.022 13:27:59 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:53.022 13:27:59 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:53.022 13:27:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:53.022 13:27:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:53.022 13:27:59 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:33:53.022 00:33:53.022 real 0m59.402s 00:33:53.022 user 3m51.166s 00:33:53.022 sys 0m14.383s 00:33:53.022 13:27:59 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:53.022 13:27:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:53.022 ************************************ 00:33:53.022 END TEST nvmf_dif 00:33:53.022 ************************************ 00:33:53.022 13:27:59 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:53.022 13:27:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:53.022 13:27:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:53.022 13:27:59 -- common/autotest_common.sh@10 -- # set +x 00:33:53.022 ************************************ 00:33:53.022 START TEST nvmf_abort_qd_sizes 00:33:53.022 ************************************ 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:53.023 * Looking for test storage... 00:33:53.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:53.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:53.023 --rc genhtml_branch_coverage=1 00:33:53.023 --rc genhtml_function_coverage=1 00:33:53.023 --rc genhtml_legend=1 00:33:53.023 --rc geninfo_all_blocks=1 00:33:53.023 --rc geninfo_unexecuted_blocks=1 00:33:53.023 00:33:53.023 ' 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:53.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:53.023 --rc genhtml_branch_coverage=1 00:33:53.023 --rc genhtml_function_coverage=1 00:33:53.023 --rc genhtml_legend=1 00:33:53.023 --rc geninfo_all_blocks=1 00:33:53.023 --rc geninfo_unexecuted_blocks=1 00:33:53.023 00:33:53.023 ' 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:53.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:53.023 --rc genhtml_branch_coverage=1 00:33:53.023 --rc genhtml_function_coverage=1 00:33:53.023 --rc genhtml_legend=1 00:33:53.023 --rc geninfo_all_blocks=1 00:33:53.023 --rc geninfo_unexecuted_blocks=1 00:33:53.023 00:33:53.023 ' 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:53.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:53.023 --rc genhtml_branch_coverage=1 00:33:53.023 --rc genhtml_function_coverage=1 00:33:53.023 --rc genhtml_legend=1 00:33:53.023 --rc geninfo_all_blocks=1 00:33:53.023 --rc geninfo_unexecuted_blocks=1 00:33:53.023 00:33:53.023 ' 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:53.023 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:53.023 13:27:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:53.024 Cannot find device "nvmf_init_br" 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:53.024 Cannot find device "nvmf_init_br2" 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:53.024 Cannot find device "nvmf_tgt_br" 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:53.024 Cannot find device "nvmf_tgt_br2" 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:53.024 Cannot find device "nvmf_init_br" 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:53.024 Cannot find device "nvmf_init_br2" 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:53.024 Cannot find device "nvmf_tgt_br" 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:53.024 Cannot find device "nvmf_tgt_br2" 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:53.024 Cannot find device "nvmf_br" 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:53.024 Cannot find device "nvmf_init_if" 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:53.024 Cannot find device "nvmf_init_if2" 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:53.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
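The "Cannot find device" and "Cannot open network namespace" messages above are harmless: they come from the idempotent cleanup pass running on a clean host, with each probe immediately followed by a true fallback in the trace. nvmf_veth_init then builds the topology the commands below walk through: veth pairs for initiator and target, the target ends moved into the nvmf_tgt_ns_spdk namespace, and everything joined by the nvmf_br bridge. Condensed to one pair per side (the real helper creates a second pair for the .2/.4 addresses), the construction is roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target listener address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                   # bridge the two sides together
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br up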
00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:53.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:53.024 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:53.285 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:53.285 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:53.286 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:53.286 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:33:53.286 00:33:53.286 --- 10.0.0.3 ping statistics --- 00:33:53.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.286 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:53.286 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:53.286 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:33:53.286 00:33:53.286 --- 10.0.0.4 ping statistics --- 00:33:53.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.286 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:53.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:53.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:33:53.286 00:33:53.286 --- 10.0.0.1 ping statistics --- 00:33:53.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.286 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:53.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:53.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:33:53.286 00:33:53.286 --- 10.0.0.2 ping statistics --- 00:33:53.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.286 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:53.286 13:27:59 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:33:54.222 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:54.222 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:33:54.222 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:54.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=109687 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 109687 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 109687 ']' 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:54.222 13:28:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:54.222 [2024-12-06 13:28:00.710297] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
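nvmfappstart launches nvmf_tgt inside the target namespace and records its pid (109687 here). -m 0xf pins the app to four cores, which the reactor messages below confirm as cores 0 through 3; -e 0xFFFF enables every tracepoint group, and -i 0 sets the shared-memory id that the process_shm trap references later. A minimal sketch of the launch-and-wait step, with waitforlisten simplified (the real helper retries an RPC call against the socket rather than only checking that the file exists):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # simplified waitforlisten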
00:33:54.222 [2024-12-06 13:28:00.710591] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.481 [2024-12-06 13:28:00.869743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:54.481 [2024-12-06 13:28:00.911679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:54.481 [2024-12-06 13:28:00.911932] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:54.481 [2024-12-06 13:28:00.912195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:54.481 [2024-12-06 13:28:00.912351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:54.481 [2024-12-06 13:28:00.912529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:54.481 [2024-12-06 13:28:00.913479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.481 [2024-12-06 13:28:00.913544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:54.481 [2024-12-06 13:28:00.914099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:54.481 [2024-12-06 13:28:00.914109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # 
class=01 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:33:55.417 13:28:01 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:55.417 13:28:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:55.417 ************************************ 00:33:55.417 START TEST spdk_target_abort 00:33:55.417 ************************************ 00:33:55.417 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:33:55.417 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:55.417 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:33:55.417 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.417 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:55.417 spdk_targetn1 00:33:55.417 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.417 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:55.417 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.417 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:55.417 [2024-12-06 13:28:01.871661] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:55.417 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.417 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:55.417 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.417 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:55.417 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.417 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:55.417 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:55.418 [2024-12-06 13:28:01.905142] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.418 13:28:01 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:55.418 13:28:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:58.765 Initializing NVMe Controllers 00:33:58.765 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:33:58.765 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:58.765 Initialization complete. Launching workers. 
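rabort assembles the transport ID one field at a time (the target='trtype:tcp adrfam:IPv4 ...' entries above) and then runs the abort example once per queue depth. In the summaries that follow, "abort submitted / failed to submit" counts Abort admin commands that could or could not be queued against outstanding I/O, while "success / unsuccessful" records whether a submitted Abort actually cancelled its target command (NVMe allows an Abort to complete without aborting anything). The sweep, condensed from the trace, with -q as the queue depth, -w rw plus -M 50 as a 50/50 read/write mix, and -o 4096 as the I/O size:

    r='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do               # qds=(4 24 64) in the trace
      /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$r"
    done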
00:33:58.765 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11290, failed: 0 00:33:58.765 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1054, failed to submit 10236 00:33:58.765 success 786, unsuccessful 268, failed 0 00:33:58.765 13:28:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:58.765 13:28:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:02.050 Initializing NVMe Controllers 00:34:02.050 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:34:02.050 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:02.050 Initialization complete. Launching workers. 00:34:02.050 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5974, failed: 0 00:34:02.050 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1278, failed to submit 4696 00:34:02.050 success 245, unsuccessful 1033, failed 0 00:34:02.050 13:28:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:02.050 13:28:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:05.338 Initializing NVMe Controllers 00:34:05.338 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:34:05.338 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:05.338 Initialization complete. Launching workers. 
00:34:05.338 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30000, failed: 0 00:34:05.339 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2777, failed to submit 27223 00:34:05.339 success 436, unsuccessful 2341, failed 0 00:34:05.339 13:28:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:05.339 13:28:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.339 13:28:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:05.339 13:28:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.339 13:28:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:05.339 13:28:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.339 13:28:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:06.276 13:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.276 13:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 109687 00:34:06.276 13:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 109687 ']' 00:34:06.276 13:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 109687 00:34:06.276 13:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:06.535 13:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:06.535 13:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109687 00:34:06.535 13:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:06.535 13:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:06.535 killing process with pid 109687 00:34:06.535 13:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109687' 00:34:06.535 13:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 109687 00:34:06.535 13:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 109687 00:34:06.535 00:34:06.535 real 0m11.179s 00:34:06.535 user 0m46.177s 00:34:06.535 sys 0m1.568s 00:34:06.535 13:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:06.535 13:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:06.535 ************************************ 00:34:06.535 END TEST spdk_target_abort 00:34:06.535 ************************************ 00:34:06.535 13:28:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:06.535 13:28:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:06.535 13:28:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:06.535 13:28:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:06.535 ************************************ 00:34:06.535 START TEST kernel_target_abort 00:34:06.535 
************************************ 00:34:06.535 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:06.535 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:06.535 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:06.535 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:06.535 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:06.535 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.535 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.535 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:06.535 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.535 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:06.535 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:06.535 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:06.535 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:06.536 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:06.536 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:06.536 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:06.536 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:06.536 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:06.536 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:34:06.536 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:06.536 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:06.794 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:06.794 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:07.053 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:07.053 Waiting for block devices as requested 00:34:07.053 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:07.053 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:34:07.315 No valid GPT data, bailing 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:34:07.315 No valid GPT data, bailing 00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
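Each /sys/block/nvme* device is screened the same way before the kernel target is configured: a device whose partition-table probe comes back empty ("No valid GPT data, bailing", followed by empty blkid PTTYPE output) is treated as free, anything else as in use. A sketch of the selection logic under that reading of the trace (helper structure simplified; the real checks live in scripts/common.sh):

    for block in /sys/block/nvme*; do
      dev=/dev/${block##*/}
      if [ -z "$(blkid -s PTTYPE -o value "$dev")" ]; then
        nvme=$dev          # last free device wins; here that is /dev/nvme1n1
      fi
    done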
00:34:07.315 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:07.316 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:07.316 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:34:07.316 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:07.316 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:34:07.316 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:34:07.316 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:34:07.316 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:34:07.316 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:07.316 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:34:07.316 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:34:07.316 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:34:07.574 No valid GPT data, bailing 00:34:07.574 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:34:07.574 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:07.574 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:34:07.575 No valid GPT data, bailing 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:07.575 13:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 --hostid=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 -a 10.0.0.1 -t tcp -s 4420 00:34:07.575 00:34:07.575 Discovery Log Number of Records 2, Generation counter 2 00:34:07.575 =====Discovery Log Entry 0====== 00:34:07.575 trtype: tcp 00:34:07.575 adrfam: ipv4 00:34:07.575 subtype: current discovery subsystem 00:34:07.575 treq: not specified, sq flow control disable supported 00:34:07.575 portid: 1 00:34:07.575 trsvcid: 4420 00:34:07.575 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:07.575 traddr: 10.0.0.1 00:34:07.575 eflags: none 00:34:07.575 sectype: none 00:34:07.575 =====Discovery Log Entry 1====== 00:34:07.575 trtype: tcp 00:34:07.575 adrfam: ipv4 00:34:07.575 subtype: nvme subsystem 00:34:07.575 treq: not specified, sq flow control disable supported 00:34:07.575 portid: 1 00:34:07.575 trsvcid: 4420 00:34:07.575 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:07.575 traddr: 10.0.0.1 00:34:07.575 eflags: none 00:34:07.575 sectype: none 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:07.575 13:28:14 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:07.575 13:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:10.862 Initializing NVMe Controllers 00:34:10.862 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:10.862 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:10.862 Initialization complete. Launching workers. 00:34:10.862 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34917, failed: 0 00:34:10.862 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34917, failed to submit 0 00:34:10.862 success 0, unsuccessful 34917, failed 0 00:34:10.863 13:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:10.863 13:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:14.147 Initializing NVMe Controllers 00:34:14.147 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:14.147 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:14.147 Initialization complete. Launching workers. 
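Before these runs, configure_kernel_target exported /dev/nvme1n1 through the kernel's nvmet configfs tree (the mkdir, echo, and ln -s entries above). The echoed values appear in the trace, but xtrace does not show redirect targets, so the attribute names below are filled in from the standard nvmet configfs layout as an assumption:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo 1            > "$subsys/attr_allow_any_host"      # assumed target for the bare 'echo 1'
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

Note that every kernel-target run reports success 0: all submitted Aborts complete as unsuccessful, consistent with the Linux target acknowledging Abort commands without cancelling the referenced I/O.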
00:34:14.147 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65281, failed: 0 00:34:14.147 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28560, failed to submit 36721 00:34:14.147 success 0, unsuccessful 28560, failed 0 00:34:14.147 13:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:14.147 13:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:17.433 Initializing NVMe Controllers 00:34:17.433 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:17.433 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:17.433 Initialization complete. Launching workers. 00:34:17.433 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 81609, failed: 0 00:34:17.433 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20386, failed to submit 61223 00:34:17.433 success 0, unsuccessful 20386, failed 0 00:34:17.433 13:28:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:17.433 13:28:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:17.433 13:28:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:17.433 13:28:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:17.433 13:28:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:17.433 13:28:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:17.433 13:28:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:17.433 13:28:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:17.433 13:28:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:17.433 13:28:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:34:18.000 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:19.900 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:34:19.900 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:34:20.158 00:34:20.158 real 0m13.447s 00:34:20.158 user 0m6.496s 00:34:20.158 sys 0m4.416s 00:34:20.158 13:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:20.158 13:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:20.158 ************************************ 00:34:20.158 END TEST kernel_target_abort 00:34:20.158 ************************************ 00:34:20.158 13:28:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:20.158 13:28:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:20.158 
13:28:26 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:20.158 13:28:26 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:20.158 13:28:26 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:20.158 13:28:26 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:20.158 13:28:26 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:20.158 13:28:26 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:20.158 rmmod nvme_tcp 00:34:20.158 rmmod nvme_fabrics 00:34:20.158 rmmod nvme_keyring 00:34:20.158 13:28:26 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:20.158 13:28:26 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:20.158 13:28:26 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:20.158 13:28:26 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 109687 ']' 00:34:20.158 13:28:26 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 109687 00:34:20.158 13:28:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 109687 ']' 00:34:20.158 13:28:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 109687 00:34:20.158 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (109687) - No such process 00:34:20.158 Process with pid 109687 is not found 00:34:20.158 13:28:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 109687 is not found' 00:34:20.158 13:28:26 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:20.158 13:28:26 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:20.724 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:20.724 Waiting for block devices as requested 00:34:20.724 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:20.724 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:20.724 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:20.724 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:20.724 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:34:20.724 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:34:20.724 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:20.724 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:34:20.724 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:20.724 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:20.724 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:20.724 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:20.982 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:20.982 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:20.982 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:20.982 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:20.982 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:20.982 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:20.982 13:28:27 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:20.982 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:20.982 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:20.982 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:20.982 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:20.982 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:20.982 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.982 13:28:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:20.982 13:28:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.982 13:28:27 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:34:20.982 00:34:20.982 real 0m28.355s 00:34:20.982 user 0m54.024s 00:34:20.982 sys 0m7.452s 00:34:20.982 13:28:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:20.982 ************************************ 00:34:20.982 13:28:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:20.982 END TEST nvmf_abort_qd_sizes 00:34:20.982 ************************************ 00:34:21.241 13:28:27 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:34:21.241 13:28:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:21.241 13:28:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:21.241 13:28:27 -- common/autotest_common.sh@10 -- # set +x 00:34:21.241 ************************************ 00:34:21.241 START TEST keyring_file 00:34:21.241 ************************************ 00:34:21.241 13:28:27 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:34:21.241 * Looking for test storage... 
00:34:21.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:34:21.241 13:28:27 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:21.241 13:28:27 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:34:21.241 13:28:27 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:21.241 13:28:27 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@345 -- # : 1 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@353 -- # local d=1 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@355 -- # echo 1 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@353 -- # local d=2 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@355 -- # echo 2 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:21.241 13:28:27 keyring_file -- scripts/common.sh@368 -- # return 0 00:34:21.241 13:28:27 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:21.241 13:28:27 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:21.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.241 --rc genhtml_branch_coverage=1 00:34:21.241 --rc genhtml_function_coverage=1 00:34:21.241 --rc genhtml_legend=1 00:34:21.241 --rc geninfo_all_blocks=1 00:34:21.241 --rc geninfo_unexecuted_blocks=1 00:34:21.241 00:34:21.241 ' 00:34:21.241 13:28:27 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:21.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.241 --rc genhtml_branch_coverage=1 00:34:21.241 --rc genhtml_function_coverage=1 00:34:21.241 --rc genhtml_legend=1 00:34:21.241 --rc geninfo_all_blocks=1 00:34:21.241 --rc 
geninfo_unexecuted_blocks=1 00:34:21.241 00:34:21.241 ' 00:34:21.241 13:28:27 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:21.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.241 --rc genhtml_branch_coverage=1 00:34:21.241 --rc genhtml_function_coverage=1 00:34:21.241 --rc genhtml_legend=1 00:34:21.241 --rc geninfo_all_blocks=1 00:34:21.241 --rc geninfo_unexecuted_blocks=1 00:34:21.241 00:34:21.241 ' 00:34:21.241 13:28:27 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:21.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.241 --rc genhtml_branch_coverage=1 00:34:21.241 --rc genhtml_function_coverage=1 00:34:21.241 --rc genhtml_legend=1 00:34:21.241 --rc geninfo_all_blocks=1 00:34:21.241 --rc geninfo_unexecuted_blocks=1 00:34:21.241 00:34:21.241 ' 00:34:21.241 13:28:27 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:34:21.241 13:28:27 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:21.241 13:28:27 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:34:21.241 13:28:27 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:21.241 13:28:27 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:21.241 13:28:27 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:21.241 13:28:27 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:21.241 13:28:27 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:21.241 13:28:27 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:21.241 13:28:27 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:21.241 13:28:27 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:21.241 13:28:27 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:21.241 13:28:27 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:21.241 13:28:27 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:34:21.241 13:28:27 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:34:21.241 13:28:27 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:21.241 13:28:27 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:21.241 13:28:27 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:21.242 13:28:27 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:21.242 13:28:27 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:21.242 13:28:27 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:34:21.242 13:28:27 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:21.242 13:28:27 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:21.242 13:28:27 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:21.242 13:28:27 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.242 13:28:27 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.242 13:28:27 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.242 13:28:27 keyring_file -- paths/export.sh@5 -- # export PATH 00:34:21.242 13:28:27 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.242 13:28:27 keyring_file -- nvmf/common.sh@51 -- # : 0 00:34:21.242 13:28:27 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:21.242 13:28:27 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:21.242 13:28:27 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:21.242 13:28:27 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:21.242 13:28:27 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:21.242 13:28:27 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:21.242 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:21.242 13:28:27 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:21.242 13:28:27 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:21.242 13:28:27 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:21.242 13:28:27 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:21.242 13:28:27 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:21.242 13:28:27 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:21.242 13:28:27 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:21.242 13:28:27 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:21.242 13:28:27 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:21.242 13:28:27 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:21.242 13:28:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:21.242 13:28:27 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:21.242 13:28:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:21.242 13:28:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:21.242 13:28:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:21.242 13:28:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jvOMBiKuLu 00:34:21.242 13:28:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:21.242 13:28:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:21.242 13:28:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:21.242 13:28:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:21.242 13:28:27 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:21.242 13:28:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:21.242 13:28:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:21.500 13:28:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jvOMBiKuLu 00:34:21.501 13:28:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jvOMBiKuLu 00:34:21.501 13:28:27 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.jvOMBiKuLu 00:34:21.501 13:28:27 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:21.501 13:28:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:21.501 13:28:27 keyring_file -- keyring/common.sh@17 -- # name=key1 00:34:21.501 13:28:27 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:21.501 13:28:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:21.501 13:28:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:21.501 13:28:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ptJ30v7H9p 00:34:21.501 13:28:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:21.501 13:28:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:21.501 13:28:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:21.501 13:28:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:21.501 13:28:27 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:21.501 13:28:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:21.501 13:28:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:21.501 13:28:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ptJ30v7H9p 00:34:21.501 13:28:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ptJ30v7H9p 00:34:21.501 13:28:27 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ptJ30v7H9p 00:34:21.501 13:28:27 keyring_file -- keyring/file.sh@30 -- # tgtpid=110609 00:34:21.501 13:28:27 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:21.501 13:28:27 keyring_file -- keyring/file.sh@32 -- # waitforlisten 110609 00:34:21.501 13:28:27 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 110609 ']' 00:34:21.501 13:28:27 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:21.501 13:28:27 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:21.501 13:28:27 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:34:21.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:21.501 13:28:27 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:21.501 13:28:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:21.501 [2024-12-06 13:28:27.946566] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:34:21.501 [2024-12-06 13:28:27.946667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110609 ] 00:34:21.759 [2024-12-06 13:28:28.097588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:21.759 [2024-12-06 13:28:28.136808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:22.017 13:28:28 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:22.017 [2024-12-06 13:28:28.340798] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:22.017 null0 00:34:22.017 [2024-12-06 13:28:28.372766] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:22.017 [2024-12-06 13:28:28.372980] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.017 13:28:28 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:22.017 [2024-12-06 13:28:28.400764] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:22.017 2024/12/06 13:28:28 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:34:22.017 request: 00:34:22.017 { 00:34:22.017 "method": "nvmf_subsystem_add_listener", 00:34:22.017 "params": { 00:34:22.017 "nqn": 
"nqn.2016-06.io.spdk:cnode0", 00:34:22.017 "secure_channel": false, 00:34:22.017 "listen_address": { 00:34:22.017 "trtype": "tcp", 00:34:22.017 "traddr": "127.0.0.1", 00:34:22.017 "trsvcid": "4420" 00:34:22.017 } 00:34:22.017 } 00:34:22.017 } 00:34:22.017 Got JSON-RPC error response 00:34:22.017 GoRPCClient: error on JSON-RPC call 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:22.017 13:28:28 keyring_file -- keyring/file.sh@47 -- # bperfpid=110632 00:34:22.017 13:28:28 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:22.017 13:28:28 keyring_file -- keyring/file.sh@49 -- # waitforlisten 110632 /var/tmp/bperf.sock 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 110632 ']' 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.017 13:28:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:22.017 [2024-12-06 13:28:28.466270] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:34:22.017 [2024-12-06 13:28:28.466376] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110632 ] 00:34:22.274 [2024-12-06 13:28:28.617229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.274 [2024-12-06 13:28:28.656913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:22.274 13:28:28 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:22.274 13:28:28 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:22.274 13:28:28 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jvOMBiKuLu 00:34:22.274 13:28:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jvOMBiKuLu 00:34:22.532 13:28:29 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ptJ30v7H9p 00:34:22.532 13:28:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ptJ30v7H9p 00:34:23.098 13:28:29 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:23.098 13:28:29 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:34:23.099 13:28:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:23.099 13:28:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:23.099 13:28:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:23.357 13:28:29 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.jvOMBiKuLu == \/\t\m\p\/\t\m\p\.\j\v\O\M\B\i\K\u\L\u ]] 00:34:23.357 13:28:29 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:34:23.357 13:28:29 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:34:23.357 13:28:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:23.357 13:28:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:23.357 13:28:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:23.615 13:28:30 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.ptJ30v7H9p == \/\t\m\p\/\t\m\p\.\p\t\J\3\0\v\7\H\9\p ]] 00:34:23.615 13:28:30 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:34:23.615 13:28:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:23.615 13:28:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:23.615 13:28:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:23.615 13:28:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:23.615 13:28:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:23.873 13:28:30 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:34:23.873 13:28:30 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:34:23.873 13:28:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:23.873 13:28:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:23.873 13:28:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:23.873 13:28:30 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:23.873 13:28:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:24.437 13:28:30 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:34:24.437 13:28:30 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:24.437 13:28:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:24.695 [2024-12-06 13:28:30.997702] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:24.695 nvme0n1 00:34:24.695 13:28:31 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:34:24.695 13:28:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:24.695 13:28:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:24.695 13:28:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:24.695 13:28:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:24.695 13:28:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:24.952 13:28:31 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:34:24.952 13:28:31 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:34:24.952 13:28:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:24.952 13:28:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:24.952 13:28:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:24.952 13:28:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:24.952 13:28:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:25.284 13:28:31 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:34:25.284 13:28:31 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:25.559 Running I/O for 1 seconds... 
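[Editor's note: before the I/O numbers land below, note that every refcnt assertion in this trace goes through the same two helpers from test/keyring/common.sh, visible in the xtrace as keyring_get_keys piped into jq. Their likely shape, assuming the bperf RPC socket used in this run:]

    get_key() {
        # list all keys registered with the bdevperf instance, pick one by name
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys |
            jq ".[] | select(.name == \"$1\")"
    }
    get_refcnt() {
        get_key "$1" | jq -r .refcnt
    }

Attaching the controller with --psk key0 is what raises key0's refcnt from 1 to 2 in the checks above, while key1 stays at 1.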
00:34:26.495 10891.00 IOPS, 42.54 MiB/s 00:34:26.495 Latency(us) 00:34:26.495 [2024-12-06T13:28:33.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:26.495 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:34:26.495 nvme0n1 : 1.01 10935.66 42.72 0.00 0.00 11668.24 5123.72 21686.46 00:34:26.495 [2024-12-06T13:28:33.023Z] =================================================================================================================== 00:34:26.495 [2024-12-06T13:28:33.023Z] Total : 10935.66 42.72 0.00 0.00 11668.24 5123.72 21686.46 00:34:26.495 { 00:34:26.495 "results": [ 00:34:26.495 { 00:34:26.495 "job": "nvme0n1", 00:34:26.495 "core_mask": "0x2", 00:34:26.495 "workload": "randrw", 00:34:26.495 "percentage": 50, 00:34:26.495 "status": "finished", 00:34:26.495 "queue_depth": 128, 00:34:26.495 "io_size": 4096, 00:34:26.495 "runtime": 1.007712, 00:34:26.495 "iops": 10935.6641580134, 00:34:26.496 "mibps": 42.717438117239844, 00:34:26.496 "io_failed": 0, 00:34:26.496 "io_timeout": 0, 00:34:26.496 "avg_latency_us": 11668.242288731233, 00:34:26.496 "min_latency_us": 5123.723636363637, 00:34:26.496 "max_latency_us": 21686.458181818183 00:34:26.496 } 00:34:26.496 ], 00:34:26.496 "core_count": 1 00:34:26.496 } 00:34:26.496 13:28:32 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:26.496 13:28:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:26.755 13:28:33 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:34:26.755 13:28:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:26.755 13:28:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:26.755 13:28:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:26.755 13:28:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:26.755 13:28:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:27.322 13:28:33 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:34:27.322 13:28:33 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:34:27.322 13:28:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:27.322 13:28:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:27.322 13:28:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:27.322 13:28:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:27.323 13:28:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:27.582 13:28:33 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:34:27.582 13:28:33 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:27.582 13:28:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:27.582 13:28:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:27.582 13:28:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:27.582 13:28:33 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:27.582 13:28:33 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:27.582 13:28:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:27.582 13:28:33 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:27.582 13:28:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:27.841 [2024-12-06 13:28:34.237777] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:27.841 [2024-12-06 13:28:34.237886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fa1f0 (107): Transport endpoint is not connected 00:34:27.841 [2024-12-06 13:28:34.238876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fa1f0 (9): Bad file descriptor 00:34:27.841 [2024-12-06 13:28:34.239873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:27.841 [2024-12-06 13:28:34.239902] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:27.841 [2024-12-06 13:28:34.239914] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:27.841 [2024-12-06 13:28:34.239925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
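[Editor's note: these transport errors, and the JSON-RPC error dump that follows, are the expected outcome — the attach with the wrong key (key1) is wrapped in the NOT helper from common/autotest_common.sh, which inverts the wrapped command's exit status so an expected failure keeps the suite green. A minimal sketch of the pattern; the real helper also special-cases exit codes above 128, which the (( es > 128 )) bookkeeping below reflects.]

    NOT() {
        # succeed only when the wrapped command fails; negative-path tests
        # wrap their expected-to-fail RPCs in this
        if "$@"; then
            return 1
        fi
        return 0
    }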
00:34:27.841 2024/12/06 13:28:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:34:27.841 request: 00:34:27.841 { 00:34:27.841 "method": "bdev_nvme_attach_controller", 00:34:27.841 "params": { 00:34:27.841 "name": "nvme0", 00:34:27.841 "trtype": "tcp", 00:34:27.841 "traddr": "127.0.0.1", 00:34:27.841 "adrfam": "ipv4", 00:34:27.841 "trsvcid": "4420", 00:34:27.841 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:27.841 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:27.841 "prchk_reftag": false, 00:34:27.841 "prchk_guard": false, 00:34:27.841 "hdgst": false, 00:34:27.841 "ddgst": false, 00:34:27.841 "psk": "key1", 00:34:27.841 "allow_unrecognized_csi": false 00:34:27.841 } 00:34:27.841 } 00:34:27.841 Got JSON-RPC error response 00:34:27.841 GoRPCClient: error on JSON-RPC call 00:34:27.841 13:28:34 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:27.841 13:28:34 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:27.841 13:28:34 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:27.841 13:28:34 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:27.841 13:28:34 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:34:27.841 13:28:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:27.841 13:28:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:27.841 13:28:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:27.841 13:28:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:27.841 13:28:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:28.100 13:28:34 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:34:28.100 13:28:34 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:34:28.100 13:28:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:28.100 13:28:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:28.100 13:28:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:28.100 13:28:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:28.100 13:28:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:28.667 13:28:34 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:34:28.667 13:28:34 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:34:28.667 13:28:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:28.925 13:28:35 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:34:28.925 13:28:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:34:29.183 13:28:35 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:34:29.183 13:28:35 keyring_file -- keyring/file.sh@78 -- # jq length 00:34:29.183 13:28:35 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:29.442 13:28:35 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:34:29.442 13:28:35 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.jvOMBiKuLu 00:34:29.442 13:28:35 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.jvOMBiKuLu 00:34:29.442 13:28:35 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:29.442 13:28:35 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.jvOMBiKuLu 00:34:29.442 13:28:35 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:29.442 13:28:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:29.442 13:28:35 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:29.442 13:28:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:29.442 13:28:35 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jvOMBiKuLu 00:34:29.442 13:28:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jvOMBiKuLu 00:34:29.701 [2024-12-06 13:28:36.091255] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.jvOMBiKuLu': 0100660 00:34:29.701 [2024-12-06 13:28:36.091309] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:34:29.701 2024/12/06 13:28:36 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.jvOMBiKuLu], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:34:29.701 request: 00:34:29.701 { 00:34:29.701 "method": "keyring_file_add_key", 00:34:29.701 "params": { 00:34:29.701 "name": "key0", 00:34:29.701 "path": "/tmp/tmp.jvOMBiKuLu" 00:34:29.701 } 00:34:29.701 } 00:34:29.701 Got JSON-RPC error response 00:34:29.701 GoRPCClient: error on JSON-RPC call 00:34:29.701 13:28:36 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:29.701 13:28:36 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:29.701 13:28:36 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:29.702 13:28:36 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:29.702 13:28:36 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.jvOMBiKuLu 00:34:29.702 13:28:36 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jvOMBiKuLu 00:34:29.702 13:28:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jvOMBiKuLu 00:34:29.961 13:28:36 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.jvOMBiKuLu 00:34:29.961 13:28:36 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:34:29.961 13:28:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:29.961 13:28:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:29.961 13:28:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:29.961 13:28:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:29.961 13:28:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:30.220 13:28:36 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:34:30.220 13:28:36 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:30.220 13:28:36 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:30.220 13:28:36 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:30.220 13:28:36 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:30.220 13:28:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:30.220 13:28:36 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:30.220 13:28:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:30.220 13:28:36 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:30.220 13:28:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:30.479 [2024-12-06 13:28:36.903476] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.jvOMBiKuLu': No such file or directory 00:34:30.479 [2024-12-06 13:28:36.903524] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:34:30.479 [2024-12-06 13:28:36.903547] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:34:30.479 [2024-12-06 13:28:36.903557] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:34:30.479 [2024-12-06 13:28:36.903567] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:30.479 [2024-12-06 13:28:36.903576] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:34:30.479 2024/12/06 13:28:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:34:30.479 request: 00:34:30.479 { 00:34:30.479 "method": "bdev_nvme_attach_controller", 00:34:30.479 "params": { 00:34:30.479 "name": "nvme0", 00:34:30.479 "trtype": "tcp", 00:34:30.479 "traddr": "127.0.0.1", 00:34:30.479 "adrfam": "ipv4", 00:34:30.479 "trsvcid": "4420", 00:34:30.479 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:30.479 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:30.479 "prchk_reftag": false, 00:34:30.479 "prchk_guard": false, 00:34:30.479 "hdgst": false, 00:34:30.479 "ddgst": false, 00:34:30.479 "psk": "key0", 00:34:30.479 "allow_unrecognized_csi": false 00:34:30.479 } 00:34:30.479 } 00:34:30.479 Got JSON-RPC error response 00:34:30.479 
GoRPCClient: error on JSON-RPC call 00:34:30.479 13:28:36 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:30.479 13:28:36 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:30.479 13:28:36 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:30.479 13:28:36 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:30.479 13:28:36 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:34:30.479 13:28:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:30.738 13:28:37 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:30.738 13:28:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:30.738 13:28:37 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:30.738 13:28:37 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:30.738 13:28:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:30.738 13:28:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:30.738 13:28:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UtQWGVIAmk 00:34:30.738 13:28:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:30.738 13:28:37 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:30.738 13:28:37 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:30.738 13:28:37 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:30.738 13:28:37 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:30.738 13:28:37 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:30.738 13:28:37 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:30.738 13:28:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UtQWGVIAmk 00:34:30.738 13:28:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UtQWGVIAmk 00:34:30.996 13:28:37 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.UtQWGVIAmk 00:34:30.996 13:28:37 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UtQWGVIAmk 00:34:30.996 13:28:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UtQWGVIAmk 00:34:31.255 13:28:37 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:31.255 13:28:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:31.512 nvme0n1 00:34:31.512 13:28:37 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:34:31.512 13:28:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:31.512 13:28:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:31.512 13:28:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:31.512 13:28:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:31.512 13:28:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
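[Editor's note: the es=1 bookkeeping above closes out the second negative test. Two distinct validation paths were exercised: a key file left group-accessible (0660) was rejected at registration time with 'Invalid permissions', and a deleted key file was caught at connect time with 'Could not stat key file'; with a fresh key file in place (now /tmp/tmp.UtQWGVIAmk) the attach succeeds again. An illustrative approximation of the permission gate behind the first error, assuming GNU stat and the rule that nothing beyond owner read/write is tolerated — the exact keyring.c logic may differ.]

    check_key_path() {
        local path=$1 mode
        [[ -f $path ]] || { echo "Could not stat key file '$path'" >&2; return 1; }
        mode=$(stat -c '%a' "$path")
        # reject any key file readable or writable by group/other
        if (( 8#$mode & 8#077 )); then
            echo "Invalid permissions for key file '$path': 0$mode" >&2
            return 1
        fi
    }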
00:34:31.770 13:28:38 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:34:31.770 13:28:38 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:34:31.770 13:28:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:32.027 13:28:38 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:34:32.027 13:28:38 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:34:32.027 13:28:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:32.027 13:28:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:32.027 13:28:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:32.593 13:28:38 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:34:32.593 13:28:38 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:34:32.593 13:28:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:32.593 13:28:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:32.593 13:28:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:32.593 13:28:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:32.593 13:28:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:32.849 13:28:39 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:34:32.849 13:28:39 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:32.849 13:28:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:33.107 13:28:39 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:34:33.107 13:28:39 keyring_file -- keyring/file.sh@105 -- # jq length 00:34:33.107 13:28:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:33.364 13:28:39 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:34:33.364 13:28:39 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UtQWGVIAmk 00:34:33.364 13:28:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UtQWGVIAmk 00:34:33.929 13:28:40 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ptJ30v7H9p 00:34:33.929 13:28:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ptJ30v7H9p 00:34:33.929 13:28:40 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:33.929 13:28:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:34.497 nvme0n1 00:34:34.497 13:28:40 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:34:34.497 13:28:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
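[Editor's note: the sequence above pins down the keyring's deferred-removal semantics — while the controller still holds key0 (refcnt 2), keyring_file_remove_key only flags the key as removed; it stays listed until the last reference is dropped at detach. Reusing the hypothetical helpers sketched earlier, the assertions boil down to:]

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    (( $(get_refcnt key0) == 2 ))                    # held by the attached controller
    $rpc keyring_file_remove_key key0
    [[ $(get_key key0 | jq -r .removed) == true ]]   # flagged, but still listed
    (( $(get_refcnt key0) == 1 ))                    # the controller's reference remains
    $rpc bdev_nvme_detach_controller nvme0
    (( $($rpc keyring_get_keys | jq length) == 0 ))  # gone once the last reference drops

The save_config call just issued captures the re-registered two-key state; its JSON, dumped next, is replayed into a fresh bdevperf instance below.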
00:34:34.756 13:28:41 keyring_file -- keyring/file.sh@113 -- # config='{ 00:34:34.756 "subsystems": [ 00:34:34.756 { 00:34:34.756 "subsystem": "keyring", 00:34:34.756 "config": [ 00:34:34.756 { 00:34:34.756 "method": "keyring_file_add_key", 00:34:34.756 "params": { 00:34:34.756 "name": "key0", 00:34:34.756 "path": "/tmp/tmp.UtQWGVIAmk" 00:34:34.756 } 00:34:34.756 }, 00:34:34.756 { 00:34:34.756 "method": "keyring_file_add_key", 00:34:34.756 "params": { 00:34:34.756 "name": "key1", 00:34:34.756 "path": "/tmp/tmp.ptJ30v7H9p" 00:34:34.756 } 00:34:34.756 } 00:34:34.756 ] 00:34:34.756 }, 00:34:34.756 { 00:34:34.756 "subsystem": "iobuf", 00:34:34.756 "config": [ 00:34:34.756 { 00:34:34.756 "method": "iobuf_set_options", 00:34:34.756 "params": { 00:34:34.756 "enable_numa": false, 00:34:34.756 "large_bufsize": 135168, 00:34:34.756 "large_pool_count": 1024, 00:34:34.756 "small_bufsize": 8192, 00:34:34.756 "small_pool_count": 8192 00:34:34.756 } 00:34:34.756 } 00:34:34.756 ] 00:34:34.756 }, 00:34:34.756 { 00:34:34.756 "subsystem": "sock", 00:34:34.756 "config": [ 00:34:34.756 { 00:34:34.756 "method": "sock_set_default_impl", 00:34:34.756 "params": { 00:34:34.756 "impl_name": "posix" 00:34:34.756 } 00:34:34.756 }, 00:34:34.756 { 00:34:34.756 "method": "sock_impl_set_options", 00:34:34.756 "params": { 00:34:34.756 "enable_ktls": false, 00:34:34.756 "enable_placement_id": 0, 00:34:34.756 "enable_quickack": false, 00:34:34.756 "enable_recv_pipe": true, 00:34:34.756 "enable_zerocopy_send_client": false, 00:34:34.756 "enable_zerocopy_send_server": true, 00:34:34.756 "impl_name": "ssl", 00:34:34.756 "recv_buf_size": 4096, 00:34:34.756 "send_buf_size": 4096, 00:34:34.756 "tls_version": 0, 00:34:34.756 "zerocopy_threshold": 0 00:34:34.756 } 00:34:34.756 }, 00:34:34.756 { 00:34:34.756 "method": "sock_impl_set_options", 00:34:34.756 "params": { 00:34:34.756 "enable_ktls": false, 00:34:34.756 "enable_placement_id": 0, 00:34:34.756 "enable_quickack": false, 00:34:34.756 "enable_recv_pipe": true, 00:34:34.756 "enable_zerocopy_send_client": false, 00:34:34.756 "enable_zerocopy_send_server": true, 00:34:34.756 "impl_name": "posix", 00:34:34.756 "recv_buf_size": 2097152, 00:34:34.756 "send_buf_size": 2097152, 00:34:34.756 "tls_version": 0, 00:34:34.756 "zerocopy_threshold": 0 00:34:34.756 } 00:34:34.756 } 00:34:34.756 ] 00:34:34.756 }, 00:34:34.756 { 00:34:34.756 "subsystem": "vmd", 00:34:34.756 "config": [] 00:34:34.756 }, 00:34:34.756 { 00:34:34.756 "subsystem": "accel", 00:34:34.756 "config": [ 00:34:34.756 { 00:34:34.756 "method": "accel_set_options", 00:34:34.756 "params": { 00:34:34.756 "buf_count": 2048, 00:34:34.756 "large_cache_size": 16, 00:34:34.756 "sequence_count": 2048, 00:34:34.756 "small_cache_size": 128, 00:34:34.756 "task_count": 2048 00:34:34.756 } 00:34:34.756 } 00:34:34.756 ] 00:34:34.756 }, 00:34:34.756 { 00:34:34.756 "subsystem": "bdev", 00:34:34.756 "config": [ 00:34:34.756 { 00:34:34.756 "method": "bdev_set_options", 00:34:34.756 "params": { 00:34:34.756 "bdev_auto_examine": true, 00:34:34.756 "bdev_io_cache_size": 256, 00:34:34.756 "bdev_io_pool_size": 65535, 00:34:34.756 "iobuf_large_cache_size": 16, 00:34:34.756 "iobuf_small_cache_size": 128 00:34:34.756 } 00:34:34.756 }, 00:34:34.756 { 00:34:34.756 "method": "bdev_raid_set_options", 00:34:34.756 "params": { 00:34:34.756 "process_max_bandwidth_mb_sec": 0, 00:34:34.756 "process_window_size_kb": 1024 00:34:34.756 } 00:34:34.756 }, 00:34:34.756 { 00:34:34.756 "method": "bdev_iscsi_set_options", 00:34:34.756 "params": { 00:34:34.756 
"timeout_sec": 30 00:34:34.756 } 00:34:34.756 }, 00:34:34.756 { 00:34:34.756 "method": "bdev_nvme_set_options", 00:34:34.756 "params": { 00:34:34.756 "action_on_timeout": "none", 00:34:34.756 "allow_accel_sequence": false, 00:34:34.756 "arbitration_burst": 0, 00:34:34.756 "bdev_retry_count": 3, 00:34:34.756 "ctrlr_loss_timeout_sec": 0, 00:34:34.756 "delay_cmd_submit": true, 00:34:34.756 "dhchap_dhgroups": [ 00:34:34.756 "null", 00:34:34.756 "ffdhe2048", 00:34:34.756 "ffdhe3072", 00:34:34.756 "ffdhe4096", 00:34:34.756 "ffdhe6144", 00:34:34.756 "ffdhe8192" 00:34:34.756 ], 00:34:34.756 "dhchap_digests": [ 00:34:34.756 "sha256", 00:34:34.756 "sha384", 00:34:34.756 "sha512" 00:34:34.756 ], 00:34:34.756 "disable_auto_failback": false, 00:34:34.756 "fast_io_fail_timeout_sec": 0, 00:34:34.756 "generate_uuids": false, 00:34:34.756 "high_priority_weight": 0, 00:34:34.757 "io_path_stat": false, 00:34:34.757 "io_queue_requests": 512, 00:34:34.757 "keep_alive_timeout_ms": 10000, 00:34:34.757 "low_priority_weight": 0, 00:34:34.757 "medium_priority_weight": 0, 00:34:34.757 "nvme_adminq_poll_period_us": 10000, 00:34:34.757 "nvme_error_stat": false, 00:34:34.757 "nvme_ioq_poll_period_us": 0, 00:34:34.757 "rdma_cm_event_timeout_ms": 0, 00:34:34.757 "rdma_max_cq_size": 0, 00:34:34.757 "rdma_srq_size": 0, 00:34:34.757 "reconnect_delay_sec": 0, 00:34:34.757 "timeout_admin_us": 0, 00:34:34.757 "timeout_us": 0, 00:34:34.757 "transport_ack_timeout": 0, 00:34:34.757 "transport_retry_count": 4, 00:34:34.757 "transport_tos": 0 00:34:34.757 } 00:34:34.757 }, 00:34:34.757 { 00:34:34.757 "method": "bdev_nvme_attach_controller", 00:34:34.757 "params": { 00:34:34.757 "adrfam": "IPv4", 00:34:34.757 "ctrlr_loss_timeout_sec": 0, 00:34:34.757 "ddgst": false, 00:34:34.757 "fast_io_fail_timeout_sec": 0, 00:34:34.757 "hdgst": false, 00:34:34.757 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:34.757 "multipath": "multipath", 00:34:34.757 "name": "nvme0", 00:34:34.757 "prchk_guard": false, 00:34:34.757 "prchk_reftag": false, 00:34:34.757 "psk": "key0", 00:34:34.757 "reconnect_delay_sec": 0, 00:34:34.757 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:34.757 "traddr": "127.0.0.1", 00:34:34.757 "trsvcid": "4420", 00:34:34.757 "trtype": "TCP" 00:34:34.757 } 00:34:34.757 }, 00:34:34.757 { 00:34:34.757 "method": "bdev_nvme_set_hotplug", 00:34:34.757 "params": { 00:34:34.757 "enable": false, 00:34:34.757 "period_us": 100000 00:34:34.757 } 00:34:34.757 }, 00:34:34.757 { 00:34:34.757 "method": "bdev_wait_for_examine" 00:34:34.757 } 00:34:34.757 ] 00:34:34.757 }, 00:34:34.757 { 00:34:34.757 "subsystem": "nbd", 00:34:34.757 "config": [] 00:34:34.757 } 00:34:34.757 ] 00:34:34.757 }' 00:34:34.757 13:28:41 keyring_file -- keyring/file.sh@115 -- # killprocess 110632 00:34:34.757 13:28:41 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 110632 ']' 00:34:34.757 13:28:41 keyring_file -- common/autotest_common.sh@958 -- # kill -0 110632 00:34:34.757 13:28:41 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:34.757 13:28:41 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:34.757 13:28:41 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110632 00:34:34.757 killing process with pid 110632 00:34:34.757 Received shutdown signal, test time was about 1.000000 seconds 00:34:34.757 00:34:34.757 Latency(us) 00:34:34.757 [2024-12-06T13:28:41.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:34.757 [2024-12-06T13:28:41.285Z] 
=================================================================================================================== 00:34:34.757 [2024-12-06T13:28:41.285Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:34.757 13:28:41 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:34.757 13:28:41 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:34.757 13:28:41 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110632' 00:34:34.757 13:28:41 keyring_file -- common/autotest_common.sh@973 -- # kill 110632 00:34:34.757 13:28:41 keyring_file -- common/autotest_common.sh@978 -- # wait 110632 00:34:35.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:35.016 13:28:41 keyring_file -- keyring/file.sh@118 -- # bperfpid=111103 00:34:35.016 13:28:41 keyring_file -- keyring/file.sh@120 -- # waitforlisten 111103 /var/tmp/bperf.sock 00:34:35.016 13:28:41 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 111103 ']' 00:34:35.016 13:28:41 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:35.016 13:28:41 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:35.016 13:28:41 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:35.016 13:28:41 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:35.016 13:28:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:35.016 13:28:41 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:34:35.016 13:28:41 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:34:35.016 "subsystems": [ 00:34:35.016 { 00:34:35.016 "subsystem": "keyring", 00:34:35.016 "config": [ 00:34:35.016 { 00:34:35.016 "method": "keyring_file_add_key", 00:34:35.016 "params": { 00:34:35.016 "name": "key0", 00:34:35.016 "path": "/tmp/tmp.UtQWGVIAmk" 00:34:35.016 } 00:34:35.016 }, 00:34:35.016 { 00:34:35.016 "method": "keyring_file_add_key", 00:34:35.016 "params": { 00:34:35.016 "name": "key1", 00:34:35.016 "path": "/tmp/tmp.ptJ30v7H9p" 00:34:35.016 } 00:34:35.016 } 00:34:35.016 ] 00:34:35.016 }, 00:34:35.016 { 00:34:35.016 "subsystem": "iobuf", 00:34:35.016 "config": [ 00:34:35.016 { 00:34:35.016 "method": "iobuf_set_options", 00:34:35.016 "params": { 00:34:35.016 "enable_numa": false, 00:34:35.016 "large_bufsize": 135168, 00:34:35.016 "large_pool_count": 1024, 00:34:35.016 "small_bufsize": 8192, 00:34:35.016 "small_pool_count": 8192 00:34:35.016 } 00:34:35.016 } 00:34:35.016 ] 00:34:35.016 }, 00:34:35.016 { 00:34:35.016 "subsystem": "sock", 00:34:35.016 "config": [ 00:34:35.016 { 00:34:35.016 "method": "sock_set_default_impl", 00:34:35.016 "params": { 00:34:35.016 "impl_name": "posix" 00:34:35.016 } 00:34:35.016 }, 00:34:35.016 { 00:34:35.017 "method": "sock_impl_set_options", 00:34:35.017 "params": { 00:34:35.017 "enable_ktls": false, 00:34:35.017 "enable_placement_id": 0, 00:34:35.017 "enable_quickack": false, 00:34:35.017 "enable_recv_pipe": true, 00:34:35.017 "enable_zerocopy_send_client": false, 00:34:35.017 "enable_zerocopy_send_server": true, 00:34:35.017 "impl_name": "ssl", 00:34:35.017 "recv_buf_size": 4096, 00:34:35.017 "send_buf_size": 4096, 00:34:35.017 "tls_version": 0, 00:34:35.017 "zerocopy_threshold": 0 00:34:35.017 } 00:34:35.017 }, 
00:34:35.017 { 00:34:35.017 "method": "sock_impl_set_options", 00:34:35.017 "params": { 00:34:35.017 "enable_ktls": false, 00:34:35.017 "enable_placement_id": 0, 00:34:35.017 "enable_quickack": false, 00:34:35.017 "enable_recv_pipe": true, 00:34:35.017 "enable_zerocopy_send_client": false, 00:34:35.017 "enable_zerocopy_send_server": true, 00:34:35.017 "impl_name": "posix", 00:34:35.017 "recv_buf_size": 2097152, 00:34:35.017 "send_buf_size": 2097152, 00:34:35.017 "tls_version": 0, 00:34:35.017 "zerocopy_threshold": 0 00:34:35.017 } 00:34:35.017 } 00:34:35.017 ] 00:34:35.017 }, 00:34:35.017 { 00:34:35.017 "subsystem": "vmd", 00:34:35.017 "config": [] 00:34:35.017 }, 00:34:35.017 { 00:34:35.017 "subsystem": "accel", 00:34:35.017 "config": [ 00:34:35.017 { 00:34:35.017 "method": "accel_set_options", 00:34:35.017 "params": { 00:34:35.017 "buf_count": 2048, 00:34:35.017 "large_cache_size": 16, 00:34:35.017 "sequence_count": 2048, 00:34:35.017 "small_cache_size": 128, 00:34:35.017 "task_count": 2048 00:34:35.017 } 00:34:35.017 } 00:34:35.017 ] 00:34:35.017 }, 00:34:35.017 { 00:34:35.017 "subsystem": "bdev", 00:34:35.017 "config": [ 00:34:35.017 { 00:34:35.017 "method": "bdev_set_options", 00:34:35.017 "params": { 00:34:35.017 "bdev_auto_examine": true, 00:34:35.017 "bdev_io_cache_size": 256, 00:34:35.017 "bdev_io_pool_size": 65535, 00:34:35.017 "iobuf_large_cache_size": 16, 00:34:35.017 "iobuf_small_cache_size": 128 00:34:35.017 } 00:34:35.017 }, 00:34:35.017 { 00:34:35.017 "method": "bdev_raid_set_options", 00:34:35.017 "params": { 00:34:35.017 "process_max_bandwidth_mb_sec": 0, 00:34:35.017 "process_window_size_kb": 1024 00:34:35.017 } 00:34:35.017 }, 00:34:35.017 { 00:34:35.017 "method": "bdev_iscsi_set_options", 00:34:35.017 "params": { 00:34:35.017 "timeout_sec": 30 00:34:35.017 } 00:34:35.017 }, 00:34:35.017 { 00:34:35.017 "method": "bdev_nvme_set_options", 00:34:35.017 "params": { 00:34:35.017 "action_on_timeout": "none", 00:34:35.017 "allow_accel_sequence": false, 00:34:35.017 "arbitration_burst": 0, 00:34:35.017 "bdev_retry_count": 3, 00:34:35.017 "ctrlr_loss_timeout_sec": 0, 00:34:35.017 "delay_cmd_submit": true, 00:34:35.017 "dhchap_dhgroups": [ 00:34:35.017 "null", 00:34:35.017 "ffdhe2048", 00:34:35.017 "ffdhe3072", 00:34:35.017 "ffdhe4096", 00:34:35.017 "ffdhe6144", 00:34:35.017 "ffdhe8192" 00:34:35.017 ], 00:34:35.017 "dhchap_digests": [ 00:34:35.017 "sha256", 00:34:35.017 "sha384", 00:34:35.017 "sha512" 00:34:35.017 ], 00:34:35.017 "disable_auto_failback": false, 00:34:35.017 "fast_io_fail_timeout_sec": 0, 00:34:35.017 "generate_uuids": false, 00:34:35.017 "high_priority_weight": 0, 00:34:35.017 "io_path_stat": false, 00:34:35.017 "io_queue_requests": 512, 00:34:35.017 "keep_alive_timeout_ms": 10000, 00:34:35.017 "low_priority_weight": 0, 00:34:35.017 "medium_priority_weight": 0, 00:34:35.017 "nvme_adminq_poll_period_us": 10000, 00:34:35.017 "nvme_error_stat": false, 00:34:35.017 "nvme_ioq_poll_period_us": 0, 00:34:35.017 "rdma_cm_event_timeout_ms": 0, 00:34:35.017 "rdma_max_cq_size": 0, 00:34:35.017 "rdma_srq_size": 0, 00:34:35.017 "reconnect_delay_sec": 0, 00:34:35.017 "timeout_admin_us": 0, 00:34:35.017 "timeout_us": 0, 00:34:35.017 "transport_ack_timeout": 0, 00:34:35.017 "transport_retry_count": 4, 00:34:35.017 "transport_tos": 0 00:34:35.017 } 00:34:35.017 }, 00:34:35.017 { 00:34:35.017 "method": "bdev_nvme_attach_controller", 00:34:35.017 "params": { 00:34:35.017 "adrfam": "IPv4", 00:34:35.017 "ctrlr_loss_timeout_sec": 0, 00:34:35.017 "ddgst": false, 00:34:35.017 
"fast_io_fail_timeout_sec": 0, 00:34:35.017 "hdgst": false, 00:34:35.017 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:35.017 "multipath": "multipath", 00:34:35.017 "name": "nvme0", 00:34:35.017 "prchk_guard": false, 00:34:35.017 "prchk_reftag": false, 00:34:35.017 "psk": "key0", 00:34:35.017 "reconnect_delay_sec": 0, 00:34:35.017 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:35.017 "traddr": "127.0.0.1", 00:34:35.017 "trsvcid": "4420", 00:34:35.017 "trtype": "TCP" 00:34:35.017 } 00:34:35.017 }, 00:34:35.017 { 00:34:35.017 "method": "bdev_nvme_set_hotplug", 00:34:35.017 "params": { 00:34:35.017 "enable": false, 00:34:35.017 "period_us": 100000 00:34:35.017 } 00:34:35.017 }, 00:34:35.017 { 00:34:35.017 "method": "bdev_wait_for_examine" 00:34:35.017 } 00:34:35.017 ] 00:34:35.017 }, 00:34:35.017 { 00:34:35.017 "subsystem": "nbd", 00:34:35.017 "config": [] 00:34:35.017 } 00:34:35.017 ] 00:34:35.017 }' 00:34:35.017 [2024-12-06 13:28:41.455164] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:34:35.017 [2024-12-06 13:28:41.455499] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111103 ] 00:34:35.276 [2024-12-06 13:28:41.608684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.276 [2024-12-06 13:28:41.650047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.535 [2024-12-06 13:28:41.810401] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:36.103 13:28:42 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:36.103 13:28:42 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:36.103 13:28:42 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:34:36.103 13:28:42 keyring_file -- keyring/file.sh@121 -- # jq length 00:34:36.103 13:28:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:36.361 13:28:42 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:34:36.361 13:28:42 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:34:36.361 13:28:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:36.361 13:28:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:36.361 13:28:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:36.361 13:28:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:36.361 13:28:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:36.929 13:28:43 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:34:36.929 13:28:43 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:34:36.929 13:28:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:36.929 13:28:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:36.929 13:28:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:36.929 13:28:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:36.929 13:28:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:37.188 13:28:43 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 
00:34:37.188 13:28:43 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:34:37.188 13:28:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:34:37.188 13:28:43 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:34:37.446 13:28:43 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:34:37.446 13:28:43 keyring_file -- keyring/file.sh@1 -- # cleanup 00:34:37.446 13:28:43 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.UtQWGVIAmk /tmp/tmp.ptJ30v7H9p 00:34:37.446 13:28:43 keyring_file -- keyring/file.sh@20 -- # killprocess 111103 00:34:37.446 13:28:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 111103 ']' 00:34:37.446 13:28:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 111103 00:34:37.446 13:28:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:37.446 13:28:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:37.446 13:28:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111103 00:34:37.446 killing process with pid 111103 00:34:37.446 Received shutdown signal, test time was about 1.000000 seconds 00:34:37.446 00:34:37.446 Latency(us) 00:34:37.446 [2024-12-06T13:28:43.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:37.446 [2024-12-06T13:28:43.974Z] =================================================================================================================== 00:34:37.446 [2024-12-06T13:28:43.974Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:37.446 13:28:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:37.446 13:28:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:37.446 13:28:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111103' 00:34:37.446 13:28:43 keyring_file -- common/autotest_common.sh@973 -- # kill 111103 00:34:37.446 13:28:43 keyring_file -- common/autotest_common.sh@978 -- # wait 111103 00:34:37.705 13:28:43 keyring_file -- keyring/file.sh@21 -- # killprocess 110609 00:34:37.705 13:28:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 110609 ']' 00:34:37.705 13:28:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 110609 00:34:37.705 13:28:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:37.705 13:28:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:37.705 13:28:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110609 00:34:37.705 killing process with pid 110609 00:34:37.705 13:28:44 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:37.705 13:28:44 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:37.705 13:28:44 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110609' 00:34:37.705 13:28:44 keyring_file -- common/autotest_common.sh@973 -- # kill 110609 00:34:37.705 13:28:44 keyring_file -- common/autotest_common.sh@978 -- # wait 110609 00:34:37.964 00:34:37.964 real 0m16.714s 00:34:37.964 user 0m43.910s 00:34:37.964 sys 0m3.053s 00:34:37.964 13:28:44 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:37.964 13:28:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:37.964 ************************************ 00:34:37.964 END TEST keyring_file 
00:34:37.964 ************************************ 00:34:37.964 13:28:44 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:34:37.964 13:28:44 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:34:37.964 13:28:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:37.964 13:28:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:37.964 13:28:44 -- common/autotest_common.sh@10 -- # set +x 00:34:37.964 ************************************ 00:34:37.964 START TEST keyring_linux 00:34:37.964 ************************************ 00:34:37.964 13:28:44 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:34:37.964 Joined session keyring: 651596906 00:34:37.964 * Looking for test storage... 00:34:37.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:34:37.964 13:28:44 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:37.964 13:28:44 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:34:37.964 13:28:44 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:37.964 13:28:44 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@345 -- # : 1 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@368 -- # return 0 00:34:37.964 13:28:44 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:37.964 13:28:44 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:37.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.964 --rc genhtml_branch_coverage=1 00:34:37.964 --rc genhtml_function_coverage=1 00:34:37.964 --rc genhtml_legend=1 00:34:37.964 --rc geninfo_all_blocks=1 00:34:37.964 --rc geninfo_unexecuted_blocks=1 00:34:37.964 00:34:37.964 ' 00:34:37.964 13:28:44 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:37.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.964 --rc genhtml_branch_coverage=1 00:34:37.964 --rc genhtml_function_coverage=1 00:34:37.964 --rc genhtml_legend=1 00:34:37.964 --rc geninfo_all_blocks=1 00:34:37.964 --rc geninfo_unexecuted_blocks=1 00:34:37.964 00:34:37.964 ' 00:34:37.964 13:28:44 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:37.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.964 --rc genhtml_branch_coverage=1 00:34:37.964 --rc genhtml_function_coverage=1 00:34:37.964 --rc genhtml_legend=1 00:34:37.964 --rc geninfo_all_blocks=1 00:34:37.964 --rc geninfo_unexecuted_blocks=1 00:34:37.964 00:34:37.964 ' 00:34:37.964 13:28:44 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:37.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.964 --rc genhtml_branch_coverage=1 00:34:37.964 --rc genhtml_function_coverage=1 00:34:37.964 --rc genhtml_legend=1 00:34:37.964 --rc geninfo_all_blocks=1 00:34:37.964 --rc geninfo_unexecuted_blocks=1 00:34:37.964 00:34:37.964 ' 00:34:37.964 13:28:44 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:34:37.964 13:28:44 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:37.964 13:28:44 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:34:37.964 13:28:44 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:37.964 13:28:44 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:37.964 13:28:44 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:37.964 13:28:44 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:37.964 13:28:44 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:37.964 13:28:44 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:37.964 13:28:44 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:37.964 13:28:44 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:37.964 13:28:44 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:37.964 13:28:44 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:37.964 13:28:44 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:34:37.964 13:28:44 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=78bfff7e-a47d-4a1e-bf3a-724541d7caf2 00:34:37.964 13:28:44 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:37.964 13:28:44 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:37.964 13:28:44 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:37.964 13:28:44 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:37.964 13:28:44 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:37.964 13:28:44 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:34:38.223 13:28:44 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:38.223 13:28:44 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:38.223 13:28:44 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:38.223 13:28:44 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.223 13:28:44 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.223 13:28:44 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.223 13:28:44 keyring_linux -- paths/export.sh@5 -- # export PATH 00:34:38.223 13:28:44 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.223 13:28:44 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:34:38.223 13:28:44 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:38.223 13:28:44 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:38.223 13:28:44 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:38.223 13:28:44 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:38.223 13:28:44 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:38.223 13:28:44 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:38.223 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:38.223 13:28:44 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:38.223 13:28:44 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:38.223 13:28:44 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:38.223 13:28:44 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:38.223 13:28:44 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:38.223 13:28:44 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:38.223 13:28:44 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:34:38.223 13:28:44 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:34:38.223 13:28:44 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:34:38.223 13:28:44 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:34:38.223 13:28:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:38.223 13:28:44 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:34:38.223 13:28:44 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:38.223 13:28:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:38.224 13:28:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:34:38.224 13:28:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:38.224 13:28:44 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:38.224 13:28:44 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:38.224 13:28:44 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:38.224 13:28:44 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:38.224 13:28:44 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:38.224 13:28:44 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:38.224 13:28:44 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:34:38.224 /tmp/:spdk-test:key0 00:34:38.224 13:28:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:34:38.224 13:28:44 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:34:38.224 13:28:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:38.224 13:28:44 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:34:38.224 13:28:44 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:38.224 13:28:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:38.224 13:28:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:34:38.224 13:28:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:34:38.224 13:28:44 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:38.224 13:28:44 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:38.224 13:28:44 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:38.224 13:28:44 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:38.224 13:28:44 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:38.224 13:28:44 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:38.224 13:28:44 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:34:38.224 /tmp/:spdk-test:key1 00:34:38.224 13:28:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:34:38.224 13:28:44 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:38.224 13:28:44 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=111262 00:34:38.224 13:28:44 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 111262 00:34:38.224 13:28:44 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 111262 ']' 00:34:38.224 13:28:44 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:38.224 13:28:44 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:38.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:38.224 13:28:44 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:38.224 13:28:44 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:38.224 13:28:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:38.224 [2024-12-06 13:28:44.648513] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:34:38.224 [2024-12-06 13:28:44.648607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111262 ] 00:34:38.483 [2024-12-06 13:28:44.794712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:38.483 [2024-12-06 13:28:44.844553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.750 13:28:45 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:38.750 13:28:45 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:34:38.750 13:28:45 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:34:38.750 13:28:45 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.750 13:28:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:38.750 [2024-12-06 13:28:45.043390] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:38.750 null0 00:34:38.750 [2024-12-06 13:28:45.075354] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:38.750 [2024-12-06 13:28:45.075545] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:38.750 13:28:45 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.750 13:28:45 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:34:38.750 413695366 00:34:38.750 13:28:45 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:34:38.750 329701107 00:34:38.751 13:28:45 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=111286 00:34:38.751 13:28:45 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 111286 /var/tmp/bperf.sock 00:34:38.751 13:28:45 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:34:38.751 13:28:45 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 111286 ']' 00:34:38.751 13:28:45 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:38.751 13:28:45 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:38.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:38.751 13:28:45 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:38.751 13:28:45 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:38.751 13:28:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:38.751 [2024-12-06 13:28:45.151386] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:34:38.751 [2024-12-06 13:28:45.151469] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111286 ] 00:34:39.012 [2024-12-06 13:28:45.299909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:39.012 [2024-12-06 13:28:45.339524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:39.012 13:28:45 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:39.012 13:28:45 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:34:39.012 13:28:45 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:34:39.012 13:28:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:34:39.270 13:28:45 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:34:39.270 13:28:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:39.838 13:28:46 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:39.838 13:28:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:40.096 [2024-12-06 13:28:46.401433] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:40.096 nvme0n1 00:34:40.096 13:28:46 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:34:40.096 13:28:46 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:34:40.096 13:28:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:40.096 13:28:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:40.096 13:28:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:40.096 13:28:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:40.355 13:28:46 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:34:40.355 13:28:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:40.355 13:28:46 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:34:40.355 13:28:46 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:34:40.355 13:28:46 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:40.355 13:28:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:40.355 13:28:46 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:34:40.613 13:28:47 keyring_linux -- keyring/linux.sh@25 -- # sn=413695366 00:34:40.613 13:28:47 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:34:40.613 13:28:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:40.613 13:28:47 keyring_linux -- keyring/linux.sh@26 -- # [[ 413695366 == \4\1\3\6\9\5\3\6\6 ]] 00:34:40.613 13:28:47 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 413695366 00:34:40.613 13:28:47 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:34:40.613 13:28:47 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:40.872 Running I/O for 1 seconds... 00:34:41.808 11852.00 IOPS, 46.30 MiB/s 00:34:41.808 Latency(us) 00:34:41.808 [2024-12-06T13:28:48.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.808 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:41.808 nvme0n1 : 1.01 11856.08 46.31 0.00 0.00 10734.66 8698.41 20018.27 00:34:41.808 [2024-12-06T13:28:48.336Z] =================================================================================================================== 00:34:41.808 [2024-12-06T13:28:48.336Z] Total : 11856.08 46.31 0.00 0.00 10734.66 8698.41 20018.27 00:34:41.808 { 00:34:41.808 "results": [ 00:34:41.808 { 00:34:41.808 "job": "nvme0n1", 00:34:41.808 "core_mask": "0x2", 00:34:41.808 "workload": "randread", 00:34:41.808 "status": "finished", 00:34:41.808 "queue_depth": 128, 00:34:41.808 "io_size": 4096, 00:34:41.808 "runtime": 1.010621, 00:34:41.808 "iops": 11856.076610321772, 00:34:41.808 "mibps": 46.31279925906942, 00:34:41.808 "io_failed": 0, 00:34:41.808 "io_timeout": 0, 00:34:41.808 "avg_latency_us": 10734.663741066144, 00:34:41.808 "min_latency_us": 8698.414545454545, 00:34:41.808 "max_latency_us": 20018.269090909092 00:34:41.808 } 00:34:41.808 ], 00:34:41.808 "core_count": 1 00:34:41.808 } 00:34:41.808 13:28:48 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:41.808 13:28:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:42.376 13:28:48 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:34:42.376 13:28:48 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:34:42.376 13:28:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:42.376 13:28:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:42.376 13:28:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:42.376 13:28:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:42.634 13:28:48 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:34:42.634 13:28:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:42.634 13:28:48 keyring_linux -- keyring/linux.sh@23 -- # return 00:34:42.634 13:28:48 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:42.634 13:28:48 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:34:42.634 13:28:48 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:42.634 13:28:48 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:42.634 13:28:48 keyring_linux -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:42.634 13:28:48 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:42.634 13:28:48 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:42.634 13:28:48 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:42.634 13:28:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:42.893 [2024-12-06 13:28:49.230995] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:42.893 [2024-12-06 13:28:49.231627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc5270 (107): Transport endpoint is not connected 00:34:42.893 [2024-12-06 13:28:49.232614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc5270 (9): Bad file descriptor 00:34:42.893 [2024-12-06 13:28:49.233610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:42.893 [2024-12-06 13:28:49.233655] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:42.893 [2024-12-06 13:28:49.233667] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:42.893 [2024-12-06 13:28:49.233678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
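The ERROR lines above are the point of this step rather than a regression: the test re-attaches with --psk :spdk-test:key1, a PSK the listener was never configured to accept, so the TCP connection is torn down and the controller ends in failed state; the JSON-RPC error echoed just below is the client-side view of the same rejection. The NOT/valid_exec_arg scaffolding in the trace inverts the exit status so this expected failure counts as a pass. A hedged reconstruction of that contract follows; the real helper in autotest_common.sh is more elaborate (it also classifies the es exit codes seen in the trace).

NOT() {
    # run the wrapped command and succeed only if it fails
    if "$@"; then
        return 1  # unexpected success: a mismatched PSK must not attach
    fi
    return 0
}

NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key1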
00:34:42.893 2024/12/06 13:28:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:34:42.893 request: 00:34:42.893 { 00:34:42.893 "method": "bdev_nvme_attach_controller", 00:34:42.893 "params": { 00:34:42.893 "name": "nvme0", 00:34:42.893 "trtype": "tcp", 00:34:42.893 "traddr": "127.0.0.1", 00:34:42.893 "adrfam": "ipv4", 00:34:42.893 "trsvcid": "4420", 00:34:42.893 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:42.893 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:42.893 "prchk_reftag": false, 00:34:42.893 "prchk_guard": false, 00:34:42.893 "hdgst": false, 00:34:42.893 "ddgst": false, 00:34:42.893 "psk": ":spdk-test:key1", 00:34:42.893 "allow_unrecognized_csi": false 00:34:42.894 } 00:34:42.894 } 00:34:42.894 Got JSON-RPC error response 00:34:42.894 GoRPCClient: error on JSON-RPC call 00:34:42.894 13:28:49 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:34:42.894 13:28:49 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:42.894 13:28:49 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:42.894 13:28:49 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:42.894 13:28:49 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:34:42.894 13:28:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:42.894 13:28:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:34:42.894 13:28:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:34:42.894 13:28:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:34:42.894 13:28:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:42.894 13:28:49 keyring_linux -- keyring/linux.sh@33 -- # sn=413695366 00:34:42.894 13:28:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 413695366 00:34:42.894 1 links removed 00:34:42.894 13:28:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:42.894 13:28:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:34:42.894 13:28:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:34:42.894 13:28:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:34:42.894 13:28:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:34:42.894 13:28:49 keyring_linux -- keyring/linux.sh@33 -- # sn=329701107 00:34:42.894 13:28:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 329701107 00:34:42.894 1 links removed 00:34:42.894 13:28:49 keyring_linux -- keyring/linux.sh@41 -- # killprocess 111286 00:34:42.894 13:28:49 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 111286 ']' 00:34:42.894 13:28:49 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 111286 00:34:42.894 13:28:49 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:34:42.894 13:28:49 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:42.894 13:28:49 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111286 00:34:42.894 13:28:49 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:42.894 
13:28:49 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:42.894 13:28:49 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111286' 00:34:42.894 killing process with pid 111286 00:34:42.894 13:28:49 keyring_linux -- common/autotest_common.sh@973 -- # kill 111286 00:34:42.894 Received shutdown signal, test time was about 1.000000 seconds 00:34:42.894 00:34:42.894 Latency(us) 00:34:42.894 [2024-12-06T13:28:49.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:42.894 [2024-12-06T13:28:49.422Z] =================================================================================================================== 00:34:42.894 [2024-12-06T13:28:49.422Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:42.894 13:28:49 keyring_linux -- common/autotest_common.sh@978 -- # wait 111286 00:34:43.153 13:28:49 keyring_linux -- keyring/linux.sh@42 -- # killprocess 111262 00:34:43.153 13:28:49 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 111262 ']' 00:34:43.153 13:28:49 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 111262 00:34:43.153 13:28:49 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:34:43.153 13:28:49 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:43.153 13:28:49 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111262 00:34:43.153 13:28:49 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:43.153 13:28:49 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:43.153 13:28:49 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111262' 00:34:43.153 killing process with pid 111262 00:34:43.153 13:28:49 keyring_linux -- common/autotest_common.sh@973 -- # kill 111262 00:34:43.153 13:28:49 keyring_linux -- common/autotest_common.sh@978 -- # wait 111262 00:34:43.412 00:34:43.412 real 0m5.413s 00:34:43.412 user 0m11.412s 00:34:43.412 sys 0m1.427s 00:34:43.412 ************************************ 00:34:43.412 END TEST keyring_linux 00:34:43.412 ************************************ 00:34:43.412 13:28:49 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:43.412 13:28:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:43.412 13:28:49 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:43.412 13:28:49 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:43.412 13:28:49 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:34:43.412 13:28:49 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:34:43.412 13:28:49 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:34:43.412 13:28:49 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:43.412 13:28:49 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:43.412 13:28:49 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:43.412 13:28:49 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:43.412 13:28:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:43.412 13:28:49 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:43.412 13:28:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:43.412 13:28:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:43.412 13:28:49 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:43.412 13:28:49 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:43.412 13:28:49 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:34:43.412 13:28:49 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:43.412 13:28:49 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:34:43.412 13:28:49 -- common/autotest_common.sh@10 -- # set +x 00:34:43.412 13:28:49 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:43.412 13:28:49 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:43.412 13:28:49 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:43.412 13:28:49 -- common/autotest_common.sh@10 -- # set +x 00:34:45.312 INFO: APP EXITING 00:34:45.312 INFO: killing all VMs 00:34:45.312 INFO: killing vhost app 00:34:45.312 INFO: EXIT DONE 00:34:45.878 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:45.878 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:45.878 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:46.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:46.816 Cleaning 00:34:46.816 Removing: /var/run/dpdk/spdk0/config 00:34:46.816 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:46.816 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:46.816 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:46.816 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:46.816 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:46.816 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:46.816 Removing: /var/run/dpdk/spdk1/config 00:34:46.816 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:46.816 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:46.816 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:46.816 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:46.816 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:46.816 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:46.816 Removing: /var/run/dpdk/spdk2/config 00:34:46.816 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:46.816 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:46.816 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:46.816 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:46.816 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:46.816 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:46.816 Removing: /var/run/dpdk/spdk3/config 00:34:46.816 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:46.816 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:46.816 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:46.816 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:46.816 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:46.816 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:46.816 Removing: /var/run/dpdk/spdk4/config 00:34:46.816 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:46.816 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:46.816 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:46.816 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:46.816 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:46.816 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:46.816 Removing: /dev/shm/nvmf_trace.0 00:34:46.816 Removing: /dev/shm/spdk_tgt_trace.pid59059 00:34:46.816 Removing: /var/run/dpdk/spdk0 00:34:46.816 Removing: /var/run/dpdk/spdk1 00:34:46.816 Removing: /var/run/dpdk/spdk2 00:34:46.816 Removing: /var/run/dpdk/spdk3 00:34:46.816 Removing: /var/run/dpdk/spdk4 00:34:46.816 Removing: /var/run/dpdk/spdk_pid101119 00:34:46.816 Removing: 
/var/run/dpdk/spdk_pid101165 00:34:46.816 Removing: /var/run/dpdk/spdk_pid101513 00:34:46.816 Removing: /var/run/dpdk/spdk_pid101555 00:34:46.816 Removing: /var/run/dpdk/spdk_pid101949 00:34:46.816 Removing: /var/run/dpdk/spdk_pid102512 00:34:46.816 Removing: /var/run/dpdk/spdk_pid102941 00:34:46.816 Removing: /var/run/dpdk/spdk_pid103925 00:34:46.816 Removing: /var/run/dpdk/spdk_pid105007 00:34:46.816 Removing: /var/run/dpdk/spdk_pid105114 00:34:46.816 Removing: /var/run/dpdk/spdk_pid105171 00:34:46.816 Removing: /var/run/dpdk/spdk_pid106763 00:34:46.816 Removing: /var/run/dpdk/spdk_pid107086 00:34:46.816 Removing: /var/run/dpdk/spdk_pid107417 00:34:46.816 Removing: /var/run/dpdk/spdk_pid107964 00:34:46.816 Removing: /var/run/dpdk/spdk_pid107969 00:34:46.816 Removing: /var/run/dpdk/spdk_pid108368 00:34:46.816 Removing: /var/run/dpdk/spdk_pid108523 00:34:46.816 Removing: /var/run/dpdk/spdk_pid108675 00:34:46.816 Removing: /var/run/dpdk/spdk_pid108772 00:34:46.816 Removing: /var/run/dpdk/spdk_pid108922 00:34:46.816 Removing: /var/run/dpdk/spdk_pid109026 00:34:46.816 Removing: /var/run/dpdk/spdk_pid109755 00:34:46.816 Removing: /var/run/dpdk/spdk_pid109790 00:34:46.816 Removing: /var/run/dpdk/spdk_pid109826 00:34:46.816 Removing: /var/run/dpdk/spdk_pid110080 00:34:46.816 Removing: /var/run/dpdk/spdk_pid110111 00:34:46.816 Removing: /var/run/dpdk/spdk_pid110141 00:34:46.816 Removing: /var/run/dpdk/spdk_pid110609 00:34:46.816 Removing: /var/run/dpdk/spdk_pid110632 00:34:46.816 Removing: /var/run/dpdk/spdk_pid111103 00:34:46.816 Removing: /var/run/dpdk/spdk_pid111262 00:34:46.816 Removing: /var/run/dpdk/spdk_pid111286 00:34:46.816 Removing: /var/run/dpdk/spdk_pid58911 00:34:46.816 Removing: /var/run/dpdk/spdk_pid59059 00:34:46.816 Removing: /var/run/dpdk/spdk_pid59314 00:34:46.816 Removing: /var/run/dpdk/spdk_pid59402 00:34:46.816 Removing: /var/run/dpdk/spdk_pid59427 00:34:46.816 Removing: /var/run/dpdk/spdk_pid59537 00:34:46.816 Removing: /var/run/dpdk/spdk_pid59553 00:34:46.816 Removing: /var/run/dpdk/spdk_pid59687 00:34:46.816 Removing: /var/run/dpdk/spdk_pid59978 00:34:46.816 Removing: /var/run/dpdk/spdk_pid60156 00:34:46.816 Removing: /var/run/dpdk/spdk_pid60241 00:34:46.816 Removing: /var/run/dpdk/spdk_pid60333 00:34:46.816 Removing: /var/run/dpdk/spdk_pid60417 00:34:46.816 Removing: /var/run/dpdk/spdk_pid60450 00:34:46.816 Removing: /var/run/dpdk/spdk_pid60480 00:34:46.816 Removing: /var/run/dpdk/spdk_pid60555 00:34:46.816 Removing: /var/run/dpdk/spdk_pid60631 00:34:46.816 Removing: /var/run/dpdk/spdk_pid61276 00:34:46.816 Removing: /var/run/dpdk/spdk_pid61321 00:34:46.816 Removing: /var/run/dpdk/spdk_pid61384 00:34:46.816 Removing: /var/run/dpdk/spdk_pid61399 00:34:46.816 Removing: /var/run/dpdk/spdk_pid61480 00:34:46.816 Removing: /var/run/dpdk/spdk_pid61489 00:34:46.816 Removing: /var/run/dpdk/spdk_pid61554 00:34:46.816 Removing: /var/run/dpdk/spdk_pid61582 00:34:46.816 Removing: /var/run/dpdk/spdk_pid61634 00:34:47.075 Removing: /var/run/dpdk/spdk_pid61646 00:34:47.075 Removing: /var/run/dpdk/spdk_pid61696 00:34:47.075 Removing: /var/run/dpdk/spdk_pid61713 00:34:47.075 Removing: /var/run/dpdk/spdk_pid61873 00:34:47.075 Removing: /var/run/dpdk/spdk_pid61903 00:34:47.075 Removing: /var/run/dpdk/spdk_pid61984 00:34:47.075 Removing: /var/run/dpdk/spdk_pid62452 00:34:47.075 Removing: /var/run/dpdk/spdk_pid62814 00:34:47.075 Removing: /var/run/dpdk/spdk_pid65288 00:34:47.075 Removing: /var/run/dpdk/spdk_pid65339 00:34:47.075 Removing: /var/run/dpdk/spdk_pid65700 00:34:47.075 Removing: 
/var/run/dpdk/spdk_pid65741 00:34:47.075 Removing: /var/run/dpdk/spdk_pid66146 00:34:47.075 Removing: /var/run/dpdk/spdk_pid66724 00:34:47.075 Removing: /var/run/dpdk/spdk_pid67169 00:34:47.075 Removing: /var/run/dpdk/spdk_pid68174 00:34:47.075 Removing: /var/run/dpdk/spdk_pid69225 00:34:47.075 Removing: /var/run/dpdk/spdk_pid69348 00:34:47.075 Removing: /var/run/dpdk/spdk_pid69411 00:34:47.075 Removing: /var/run/dpdk/spdk_pid71038 00:34:47.075 Removing: /var/run/dpdk/spdk_pid71372 00:34:47.075 Removing: /var/run/dpdk/spdk_pid75207 00:34:47.075 Removing: /var/run/dpdk/spdk_pid75616 00:34:47.075 Removing: /var/run/dpdk/spdk_pid76232 00:34:47.075 Removing: /var/run/dpdk/spdk_pid76763 00:34:47.075 Removing: /var/run/dpdk/spdk_pid82666 00:34:47.075 Removing: /var/run/dpdk/spdk_pid83165 00:34:47.075 Removing: /var/run/dpdk/spdk_pid83274 00:34:47.075 Removing: /var/run/dpdk/spdk_pid83420 00:34:47.075 Removing: /var/run/dpdk/spdk_pid83467 00:34:47.075 Removing: /var/run/dpdk/spdk_pid83504 00:34:47.075 Removing: /var/run/dpdk/spdk_pid83556 00:34:47.075 Removing: /var/run/dpdk/spdk_pid83709 00:34:47.075 Removing: /var/run/dpdk/spdk_pid83856 00:34:47.075 Removing: /var/run/dpdk/spdk_pid84118 00:34:47.075 Removing: /var/run/dpdk/spdk_pid84235 00:34:47.075 Removing: /var/run/dpdk/spdk_pid84483 00:34:47.075 Removing: /var/run/dpdk/spdk_pid84581 00:34:47.075 Removing: /var/run/dpdk/spdk_pid84701 00:34:47.075 Removing: /var/run/dpdk/spdk_pid85088 00:34:47.075 Removing: /var/run/dpdk/spdk_pid85541 00:34:47.075 Removing: /var/run/dpdk/spdk_pid85542 00:34:47.075 Removing: /var/run/dpdk/spdk_pid85544 00:34:47.075 Removing: /var/run/dpdk/spdk_pid85817 00:34:47.075 Removing: /var/run/dpdk/spdk_pid86082 00:34:47.075 Removing: /var/run/dpdk/spdk_pid86486 00:34:47.075 Removing: /var/run/dpdk/spdk_pid86823 00:34:47.075 Removing: /var/run/dpdk/spdk_pid87406 00:34:47.075 Removing: /var/run/dpdk/spdk_pid87408 00:34:47.075 Removing: /var/run/dpdk/spdk_pid87785 00:34:47.075 Removing: /var/run/dpdk/spdk_pid87805 00:34:47.075 Removing: /var/run/dpdk/spdk_pid87819 00:34:47.075 Removing: /var/run/dpdk/spdk_pid87850 00:34:47.075 Removing: /var/run/dpdk/spdk_pid87857 00:34:47.075 Removing: /var/run/dpdk/spdk_pid88254 00:34:47.075 Removing: /var/run/dpdk/spdk_pid88297 00:34:47.075 Removing: /var/run/dpdk/spdk_pid88691 00:34:47.075 Removing: /var/run/dpdk/spdk_pid88923 00:34:47.075 Removing: /var/run/dpdk/spdk_pid89447 00:34:47.076 Removing: /var/run/dpdk/spdk_pid90052 00:34:47.076 Removing: /var/run/dpdk/spdk_pid91476 00:34:47.076 Removing: /var/run/dpdk/spdk_pid92125 00:34:47.076 Removing: /var/run/dpdk/spdk_pid92127 00:34:47.076 Removing: /var/run/dpdk/spdk_pid94185 00:34:47.076 Removing: /var/run/dpdk/spdk_pid94262 00:34:47.076 Removing: /var/run/dpdk/spdk_pid94339 00:34:47.076 Removing: /var/run/dpdk/spdk_pid94430 00:34:47.076 Removing: /var/run/dpdk/spdk_pid94555 00:34:47.076 Removing: /var/run/dpdk/spdk_pid94632 00:34:47.076 Removing: /var/run/dpdk/spdk_pid94703 00:34:47.076 Removing: /var/run/dpdk/spdk_pid94780 00:34:47.076 Removing: /var/run/dpdk/spdk_pid95151 00:34:47.076 Removing: /var/run/dpdk/spdk_pid95904 00:34:47.076 Removing: /var/run/dpdk/spdk_pid97296 00:34:47.076 Removing: /var/run/dpdk/spdk_pid97496 00:34:47.076 Removing: /var/run/dpdk/spdk_pid97775 00:34:47.076 Removing: /var/run/dpdk/spdk_pid98320 00:34:47.076 Removing: /var/run/dpdk/spdk_pid98681 00:34:47.076 Clean 00:34:47.334 13:28:53 -- common/autotest_common.sh@1453 -- # return 0 00:34:47.334 13:28:53 -- spdk/autotest.sh@389 -- # timing_exit 
post_cleanup 00:34:47.334 13:28:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:47.334 13:28:53 -- common/autotest_common.sh@10 -- # set +x 00:34:47.334 13:28:53 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:34:47.334 13:28:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:47.334 13:28:53 -- common/autotest_common.sh@10 -- # set +x 00:34:47.334 13:28:53 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:47.334 13:28:53 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:47.334 13:28:53 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:47.334 13:28:53 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:34:47.334 13:28:53 -- spdk/autotest.sh@398 -- # hostname 00:34:47.334 13:28:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:47.592 geninfo: WARNING: invalid characters removed from testname! 00:35:19.670 13:29:22 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:20.237 13:29:26 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:23.520 13:29:29 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:26.804 13:29:32 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:29.341 13:29:35 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:32.629 13:29:38 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:35.159 13:29:41 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:35.159 13:29:41 -- spdk/autorun.sh@1 -- $ timing_finish 00:35:35.159 13:29:41 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:35:35.159 13:29:41 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:35.159 13:29:41 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:35:35.159 13:29:41 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:35.418 + [[ -n 5265 ]] 00:35:35.418 + sudo kill 5265 00:35:35.426 [Pipeline] } 00:35:35.443 [Pipeline] // timeout 00:35:35.450 [Pipeline] } 00:35:35.465 [Pipeline] // stage 00:35:35.470 [Pipeline] } 00:35:35.487 [Pipeline] // catchError 00:35:35.496 [Pipeline] stage 00:35:35.498 [Pipeline] { (Stop VM) 00:35:35.513 [Pipeline] sh 00:35:35.871 + vagrant halt 00:35:40.056 ==> default: Halting domain... 00:35:45.338 [Pipeline] sh 00:35:45.618 + vagrant destroy -f 00:35:49.808 ==> default: Removing domain... 00:35:50.079 [Pipeline] sh 00:35:50.360 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output 00:35:50.370 [Pipeline] } 00:35:50.387 [Pipeline] // stage 00:35:50.394 [Pipeline] } 00:35:50.409 [Pipeline] // dir 00:35:50.414 [Pipeline] } 00:35:50.429 [Pipeline] // wrap 00:35:50.437 [Pipeline] } 00:35:50.451 [Pipeline] // catchError 00:35:50.463 [Pipeline] stage 00:35:50.466 [Pipeline] { (Epilogue) 00:35:50.479 [Pipeline] sh 00:35:50.760 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:58.888 [Pipeline] catchError 00:35:58.889 [Pipeline] { 00:35:58.898 [Pipeline] sh 00:35:59.173 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:59.431 Artifacts sizes are good 00:35:59.440 [Pipeline] } 00:35:59.457 [Pipeline] // catchError 00:35:59.469 [Pipeline] archiveArtifacts 00:35:59.475 Archiving artifacts 00:35:59.616 [Pipeline] cleanWs 00:35:59.633 [WS-CLEANUP] Deleting project workspace... 00:35:59.633 [WS-CLEANUP] Deferred wipeout is used... 00:35:59.681 [WS-CLEANUP] done 00:35:59.684 [Pipeline] } 00:35:59.705 [Pipeline] // stage 00:35:59.711 [Pipeline] } 00:35:59.728 [Pipeline] // node 00:35:59.734 [Pipeline] End of Pipeline 00:35:59.784 Finished: SUCCESS
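For reference, the kernel-keyring lifecycle that the keyring_linux test exercised earlier in this log condenses to four keyctl calls. The serial number and the key0 interchange-format PSK are the ones the trace printed; treat the sequence itself as a sketch of the traced flow, not a copy of keyring/linux.sh.

# register the NVMe TLS PSK as a 'user' key in the session keyring (@s)
keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s   # -> 413695366

sn=$(keyctl search @s user :spdk-test:key0)  # resolve the serial back from the description
keyctl print "$sn"                           # payload handed to bdev_nvme_attach_controller via --psk :spdk-test:key0
keyctl unlink "$sn"                          # cleanup; the log reports '1 links removed'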